
Journal of Electrical and Computer Engineering

Design of High Throughput and Cost-Efficient Data Center Networks

Guest Editors: Vincenzo Eramo, Xavier Hesselbach-Serra, Yan Luo, and Juan Felipe Botero

Copyright © 2016 Hindawi Publishing Corporation. All rights reserved.

This is a special issue published in "Journal of Electrical and Computer Engineering." All articles are open access articles distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Circuits and Systems

Muhammad Abuelma'atti, KSA; Ishfaq Ahmad, USA; Dhamin Al-Khalili, Canada; Wael M. Badawy, Canada; Ivo Barbi, Brazil; Martin A. Brooke, USA; Tian-Sheuan Chang, Taiwan; M. Jamal Deen, Canada; Andre Ivanov, Canada; Wen B. Jone, USA; H. Kuntman, Turkey; Bin-Da Liu, Taiwan; Shen-Iuan Liu, Taiwan; João Antonio Martino, Brazil; Pianki Mazumder, USA; Sing Kiong Nguang, New Zealand; Shun Ohmi, Japan; Mohamed A. Osman, USA; Ping Feng Pai, Taiwan; Marco Platzner, Germany; Dhiraj K. Pradhan, UK; Gabriel Robins, USA; Mohamad Sawan, Canada; Raj Senani, India; Gianluca Setti, Italy; Nicolas Sklavos, Greece; Ahmed M. Soliman, Egypt; Dimitrios Soudris, Greece; Charles E. Stroud, USA; Ephraim Suhir, USA; Hannu A. Tenhunen, Finland; George S. Tombras, Greece; Spyros Tragoudas, USA; Chi Kong Tse, Hong Kong; Chin-Long Wey, USA; Fei Yuan, Canada.

Communications

Sofiène Affes, Canada; Edward Au, China; Enzo Baccarelli, Italy; Stefano Basagni, USA; Jun Bi, China; René Cumplido, Mexico; Luca De Nardis, Italy; M.-G. Di Benedetto, Italy; Jocelyn Fiorina, France; Zabih F. Ghassemlooy, UK; K. Giridhar, India; Amoakoh Gyasi-Agyei, Ghana; Yaohui Jin, China; Peter Jung, Germany; Adnan Kavak, Turkey; Rajesh Khanna, India; Kiseon Kim, Republic of Korea; Tho Le-Ngoc, Canada; Cyril Leung, Canada; Petri Mähönen, Germany; Jit S. Mandeep, Malaysia; Montse Najar, Spain; Adam Panagos, USA; Samuel Pierre, Canada; John N. Sahalos, Greece; Christian Schlegel, Canada; Vinod Sharma, India; Iickho Song, Republic of Korea; Ioannis Tomkos, Greece; Chien Cheng Tseng, Taiwan; George Tsoulos, Greece; Jian-Kang Zhang, Canada; M. Abdul Matin, Brunei Darussalam.

Signal Processing

Sos Agaian, USA; Panajotis Agathoklis, Canada; Jaakko Astola, Finland; Anthony Constantinides, UK; Paul Dan Cristea, Romania; Petar M. Djuric, USA; Igor Djurović, Montenegro; Karen Egiazarian, Finland; Woon-Seng Gan, Singapore; Zabih Ghassemlooy, UK; Martin Haardt, Germany; Jiri Jan, Czech Republic; S. Jensen, Denmark; Chi Chung Ko, Singapore; James Lam, Hong Kong; Riccardo Leonardi, Italy; Sven Nordholm, Australia; Cédric Richard, France; William Sandham, UK; Ravi Sankar, USA; Andreas Spanias, USA; Yannis Stylianou, Greece; Ioan Tabus, Finland; Ari J. Visa, Finland; Jar Ferr Yang, Taiwan.

Contents

Design of High Throughput and Cost-Efficient Data Center Networks, Vincenzo Eramo, Xavier Hesselbach-Serra, Yan Luo, and Juan Felipe Botero. Volume 2016, Article ID 4695185, 2 pages.

Virtual Networking Performance in OpenStack Platform for Network Function Virtualization, Franco Callegati, Walter Cerroni, and Chiara Contoli. Volume 2016, Article ID 5249421, 15 pages.

Server Resource Dimensioning and Routing of Service Function Chain in NFV Network Architectures, V. Eramo, A. Tosti, and E. Miucci. Volume 2016, Article ID 7139852, 12 pages.

A Game for Energy-Aware Allocation of Virtualized Network Functions, Roberto Bruschi, Alessandro Carrega, and Franco Davoli. Volume 2016, Article ID 4067186, 10 pages.

A Processor-Sharing Scheduling Strategy for NFV Nodes, Giuseppe Faraci, Alfio Lombardo, and Giovanni Schembra. Volume 2016, Article ID 3583962, 10 pages.

Editorial

Design of High Throughput and Cost-Efficient Data Center Networks

Vincenzo Eramo,1 Xavier Hesselbach-Serra,2 Yan Luo,3 and Juan Felipe Botero4

1 Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, 00184 Rome, Italy
2 Network Engineering Department (ENTEL), Universitat Politecnica de Catalunya, 08034 Barcelona, Spain
3 Department of Electronics and Computers Engineering (DECE), University of Massachusetts Lowell, Lowell, MA 01854, USA
4 Department of Electronic and Telecommunications Engineering (DETE), Universidad de Antioquia, Oficina 19-450, Medellín, Colombia

Correspondence should be addressed to Vincenzo Eramo; vincenzo.eramo@uniroma1.it

Received 20 April 2016; Accepted 20 April 2016

Copyright © 2016 Vincenzo Eramo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Data centers (DC) are characterized by the sharing of compute and storage resources to support Internet services. Today many companies (Amazon, Google, Facebook, etc.) use data centers to offer storage, web search, and large-computation services, in a multibillion dollar business. The servers are interconnected by elements (switches, routers, interconnection systems, etc.) of a network platform that is referred to as a Data Center Network (DCN).

Network Function Virtualization (NFV) technology, introduced by the European Telecommunications Standards Institute (ETSI), applies cloud computing techniques in the telecommunication field, allowing network functions to be virtualized and executed in software modules running in data centers. Any network service is represented by a Service Function Chain (SFC), that is, a set of Virtual Network Functions (VNFs) to be executed according to a given order. The running of VNFs requires the instantiation of VNF instances (VNFIs), which in general are software modules executed on virtual machines.

The support of NFV calls for high-performance servers, due to the higher requirements of network services with respect to classical cloud applications.

The purpose of this special issue is to study and evaluate new solutions for the support of NFV technology.

The special issue consists of four papers, whose brief summaries are listed below.

"Server Resource Dimensioning and Routing of Service Function Chain in NFV Network Architectures" by V. Eramo et al. focuses on the resource dimensioning and SFC routing problems in NFV architectures. The objective is to minimize the number of dropped SFCs. The authors formulate the optimization problem and, due to its NP-hard complexity, propose heuristics for both the offline and online traffic demand cases.

"A Game for Energy-Aware Allocation of Virtualized Network Functions" by R. Bruschi et al. presents and evaluates an energy-aware, game-theory-based solution for resource allocation of Virtualized Network Functions (VNFs) within NFV environments. The authors consider each VNF as a player of the problem that competes for the physical network node capacity pool, seeking the minimization of individual cost functions. The physical network nodes dynamically adjust their processing capacity according to the incoming workload flows, by means of an Adaptive Rate strategy that aims at minimizing the product of energy consumption and processing delay.

"A Processor-Sharing Scheduling Strategy for NFV Nodes" by G. Faraci et al. focuses on the allocation strategies of processing resources to the virtual machines running the VNFs. The main contribution of the paper is the definition of a processor-sharing policy referred to as Network-Aware Round Robin (NARR). The proposed strategy dynamically changes the slices of the CPU assigned to each VNF according to the state of the output network interface card queues. In order not to waste output link bandwidth, more processing resources are assigned to the VNF whose packets are addressed towards the least loaded output NIC.

"Virtual Networking Performance in OpenStack Platform for Network Function Virtualization" by F. Callegati et al. evaluates the performance of an open-source Virtual Infrastructure Manager (VIM), namely, OpenStack, focusing in particular on packet forwarding performance issues. A set of experiments is presented that refer to a number of scenarios inspired by the cloud computing and NFV paradigms, considering both single- and multitenant scenarios.

Vincenzo Eramo
Xavier Hesselbach-Serra
Yan Luo
Juan Felipe Botero

Research Article

Virtual Networking Performance in OpenStack Platform for Network Function Virtualization

Franco Callegati, Walter Cerroni, and Chiara Contoli

DEI, University of Bologna, Via Venezia 52, 47521 Cesena, Italy

Correspondence should be addressed to Walter Cerroni; walter.cerroni@unibo.it

Received 19 October 2015; Revised 19 January 2016; Accepted 30 March 2016

Academic Editor: Yan Luo

Copyright © 2016 Franco Callegati et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The emerging Network Function Virtualization (NFV) paradigm, coupled with the highly flexible and programmatic control of network devices offered by Software Defined Networking solutions, enables unprecedented levels of network virtualization that will definitely change the shape of future network architectures, where legacy telco central offices will be replaced by cloud data centers located at the edge. On the one hand, this software-centric evolution of telecommunications will allow network operators to take advantage of the increased flexibility and reduced deployment costs typical of cloud computing. On the other hand, it will pose a number of challenges in terms of virtual network performance and customer isolation. This paper intends to provide some insights on how an open-source cloud computing platform such as OpenStack implements multitenant network virtualization and how it can be used to deploy NFV, focusing in particular on packet forwarding performance issues. To this purpose, a set of experiments is presented that refer to a number of scenarios inspired by the cloud computing and NFV paradigms, considering both single tenant and multitenant scenarios. From the results of the evaluation it is possible to highlight potentials and limitations of running NFV on OpenStack.

1. Introduction

Despite the original vision of the Internet as a set of networks interconnected by distributed layer 3 routing nodes, nowadays IP datagrams are not simply forwarded to their final destination based on IP header and next-hop information. A number of so-called middle-boxes process IP traffic, performing cross-layer tasks such as address translation, packet inspection and filtering, QoS management, and load balancing. They represent a significant fraction of network operators' capital and operational expenses. Moreover, they are closed systems, and the deployment of new communication services is strongly dependent on the product capabilities, causing the so-called "vendor lock-in" and Internet "ossification" phenomena [1]. A possible solution to this problem is the adoption of virtualized middle-boxes based on open software and hardware solutions. Network virtualization brings great advantages in terms of flexible network management, performed at the software level, and possible coexistence of multiple customers sharing the same physical infrastructure (i.e., multitenancy). Network virtualization solutions are already widely deployed at different protocol layers, including Virtual Local Area Networks (VLANs), multilayer Virtual Private Network (VPN) tunnels over public wide-area interconnections, and Overlay Networks [2].

Today the combination of emerging technologies such as Network Function Virtualization (NFV) and Software Defined Networking (SDN) promises to bring innovation one step further. SDN provides a more flexible and programmatic control of network devices and fosters new forms of virtualization that will definitely change the shape of future network architectures [3], while NFV defines standards to deploy software-based building blocks implementing highly flexible network service chains capable of adapting to the rapidly changing user requirements [4].

As a consequence, it is possible to imagine a medium-term evolution of the network architectures where middle-boxes will turn into virtual machines (VMs) implementing network functions within cloud computing infrastructures, and telco central offices will be replaced by data centers located at the edge of the network [5–7]. Network operators will take advantage of the increased flexibility and reduced deployment costs typical of the cloud-based approach, paving the way to the upcoming software-centric evolution of telecommunications [8]. However, a number of challenges must be dealt with, in terms of system integration, data center management, and packet processing performance. For instance, if VLANs are used in the physical switches and in the virtual LANs within the cloud infrastructure, a suitable integration is necessary, and the coexistence of different IP virtual networks dedicated to multiple tenants must be seamlessly guaranteed with proper isolation.

Then a few questions are naturally raised. Will cloud computing platforms be actually capable of satisfying the requirements of complex communication environments such as the operators' edge networks? Will data centers be able to effectively replace the existing telco infrastructures at the edge? Will virtualized networks provide performance comparable to that achieved with current physical networks, or will they pose significant limitations? Indeed, the answer to these questions will be a function of the cloud management platform considered. In this work the focus is on OpenStack, which is among the state-of-the-art Linux-based virtualization and cloud management tools. Developed by the open-source software community, OpenStack implements the Infrastructure-as-a-Service (IaaS) paradigm in a multitenant context [9].

To the best of our knowledge, not much work has been reported about the actual performance limits of network virtualization in OpenStack cloud infrastructures under the NFV scenario. Some authors assessed the performance of Linux-based virtual switching [10, 11], while others investigated network performance in public cloud services [12]. Solutions for low-latency SDN implementation on high-performance cloud platforms have also been developed [13]. However, none of the above works specifically deals with NFV scenarios on the OpenStack platform. Although some mechanisms for effectively placing virtual network functions within an OpenStack cloud have been presented [14], a detailed analysis of their network performance has not been provided yet.

This paper aims at providing insights on how the OpenStack platform implements multitenant network virtualization, focusing in particular on the performance issues, trying to fill a gap that is starting to get the attention also from the OpenStack developer community [15]. The paper objective is to identify performance bottlenecks in the cloud implementation of the NFV paradigms. An ad hoc set of experiments were designed to evaluate the OpenStack performance under critical load conditions, in both single tenant and multitenant scenarios. The results reported in this work extend the preliminary assessment published in [16, 17].

The paper is structured as follows: the network virtualization concept in cloud computing infrastructures is further elaborated in Section 2; the OpenStack virtual network architecture is illustrated in Section 3; the experimental test-bed that we have deployed to assess its performance is presented in Section 4; the results obtained under different scenarios are discussed in Section 5; some conclusions are finally drawn in Section 6.

2. Cloud Network Virtualization

Generally speaking, network virtualization is not a new concept. Virtual LANs, Virtual Private Networks, and Overlay Networks are examples of virtualization techniques already widely used in networking, mostly to achieve isolation of traffic flows and/or of whole network sections, either for security or for functional purposes such as traffic engineering and performance optimization [2].

Upon considering cloud computing infrastructures, the concept of network virtualization evolves even further. It is not just that some functionalities can be configured in physical devices to obtain some additional functionality in virtual form. In cloud infrastructures, whole parts of the network are virtual, implemented with software devices and/or functions running within the servers. This new "softwarized" network implementation scenario allows novel network control and management paradigms. In particular, the synergies between NFV and SDN offer programmatic capabilities that allow easily defining and flexibly managing multiple virtual network slices at levels not achievable before [1].

In cloud networking, the typical scenario is a set of VMs dedicated to a given tenant, able to communicate with each other as if connected to the same Local Area Network (LAN), independently of the physical server/servers they are running on. The VMs and LAN of different tenants have to be isolated and should communicate with the outside world only through layer 3 routing and filtering devices. From such requirements stem two major issues to be addressed in cloud networking: (i) integration of any set of virtual networks defined in the data center physical switches with the specific virtual network technologies adopted by the hosting servers and (ii) isolation among virtual networks that must be logically separated because of being dedicated to different purposes or different customers. Moreover, these problems should be solved with performance optimization in mind, for instance, aiming at keeping VMs with intensive exchange of data colocated in the same server, keeping local traffic inside the host and thus reducing the need for external network resources and minimizing the communication latency.

The solution to these issues is usually fully supported by the VM manager (i.e., the Hypervisor) running on the hosting servers. Layer 3 routing functions can be executed by taking advantage of lightweight virtualization tools, such as Linux containers or network namespaces, resulting in isolated virtual networks with dedicated network stacks (e.g., IP routing tables and netfilter flow states) [18]. Similarly, layer 2 switching is typically implemented by means of kernel-level virtual bridges/switches interconnecting a VM's virtual interface to a host's physical interface. Moreover, the VM placing algorithms may be designed to take networking issues into account, thus optimizing the networking in the cloud together with computation effectiveness [19]. Finally, it is worth mentioning that whatever network virtualization technology is adopted within a data center, it should be compatible with SDN-based implementation of the control plane (e.g., OpenFlow) for improved manageability and programmability [20].
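The namespace-based approach can be illustrated with a short sketch. The following is a minimal, hypothetical example (names and addresses are ours, not the paper's test-bed) that wraps the standard ip and iptables tools to build an isolated tenant router with its own routing table and NAT state:

```python
import subprocess

def sh(cmd):
    """Run a shell command and raise if it fails."""
    subprocess.run(cmd, shell=True, check=True)

# A network namespace gives the virtual router its own isolated
# network stack (IP routing table, netfilter state, interfaces).
sh("ip netns add tenant1-router")

# A veth pair acts as a virtual patch cable; one end is moved
# into the router namespace.
sh("ip link add veth-host type veth peer name veth-r")
sh("ip link set veth-r netns tenant1-router")

sh("ip addr add 10.0.3.1/24 dev veth-host")
sh("ip link set veth-host up")
sh("ip netns exec tenant1-router ip addr add 10.0.3.2/24 dev veth-r")
sh("ip netns exec tenant1-router ip link set veth-r up")

# Forwarding and NAT enabled inside the namespace do not leak to
# the host or to other tenants' namespaces.
sh("ip netns exec tenant1-router sysctl -w net.ipv4.ip_forward=1")
sh("ip netns exec tenant1-router iptables -t nat -A POSTROUTING "
   "-s 10.0.3.0/24 -j MASQUERADE")
```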


[Figure 1: Main components of an OpenStack cloud setup.]

For the purposes of this work, the implementation of layer 2 connectivity in the cloud environment is of particular relevance. Many Hypervisors running on Linux systems implement the LANs inside the servers using Linux Bridge, the native kernel bridging module [21]. This solution is straightforward and is natively integrated with the powerful Linux packet filtering and traffic conditioning kernel functions. The overall performance of this solution should be at a reasonable level when the system is not overloaded [22]. The Linux Bridge basically works as a transparent bridge with MAC learning, providing the same functionality as a standard Ethernet switch in terms of packet forwarding. But such standard behavior is not compatible with SDN and is not flexible enough when aspects such as multitenant traffic isolation, transparent VM mobility, and fine-grained forwarding programmability are critical. The Linux-based bridging alternative is Open vSwitch (OVS), a software switching facility specifically designed for virtualized environments and capable of reaching kernel-level performance [23]. OVS is also OpenFlow-enabled and therefore fully compatible and integrated with SDN solutions.
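To make the comparison concrete, the following sketch shows how the two switch types could be created by hand (bridge names, port names, and the controller address are illustrative assumptions, not the OpenStack defaults):

```python
import subprocess

def sh(cmd):
    subprocess.run(cmd, shell=True, check=True)

# Linux Bridge: a transparent learning bridge, configured but not
# programmable; it behaves like a standard Ethernet switch.
sh("ip link add br0 type bridge")
sh("ip link set tap0 master br0")   # attach a VM tap interface
sh("ip link set br0 up")

# Open vSwitch: an OpenFlow-enabled software switch.
sh("ovs-vsctl add-br ovsbr0")
sh("ovs-vsctl add-port ovsbr0 tap1")

# Unlike the Linux Bridge, OVS forwarding can be programmed,
# either by an external SDN controller...
sh("ovs-vsctl set-controller ovsbr0 tcp:192.0.2.10:6633")
# ...or by installing OpenFlow rules directly.
sh('ovs-ofctl add-flow ovsbr0 "priority=100,in_port=1,actions=drop"')
```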

3. OpenStack Virtual Network Infrastructure

OpenStack provides cloud managers with a web-based dashboard as well as a powerful and flexible Application Programmable Interface (API) to control a set of physical hosting servers executing different kinds of Hypervisors (in general, OpenStack is designed to manage a number of computers hosting application servers; these application servers can be executed by fully fledged VMs, lightweight containers, or bare-metal hosts; in this work we focus on the most challenging case of application servers running on VMs) and to manage the required storage facilities and virtual network infrastructures. The OpenStack dashboard also allows instantiating computing and networking resources within the data center infrastructure with a high level of transparency. As illustrated in Figure 1, a typical OpenStack cloud is composed of a number of physical nodes and networks:

(i) Controller node: managing the cloud platform.
(ii) Network node: hosting the networking services for the various tenants of the cloud and providing external connectivity.
(iii) Compute nodes: as many hosts as needed in the cluster to execute the VMs.
(iv) Storage nodes: to store data and VM images.
(v) Management network: the physical networking infrastructure used by the controller node to manage the OpenStack cloud services running on the other nodes.
(vi) Instance/tunnel network (or data network): the physical network infrastructure connecting the network node and the compute nodes, to deploy virtual tenant networks and allow inter-VM traffic exchange and VM connectivity to the cloud networking services running in the network node.
(vii) External network: the physical infrastructure enabling connectivity outside the data center.

OpenStack has a component specifically dedicated to network service management: this component, formerly known as Quantum, was renamed as Neutron in the Havana release. Neutron decouples the network abstractions from the actual implementation and provides administrators and users with a flexible interface for virtual network management. The Neutron server is centralized and typically runs in the controller node. It stores all network-related information and implements the virtual network infrastructure in a distributed and coordinated way. This allows Neutron to transparently manage multitenant networks across multiple compute nodes and to provide transparent VM mobility within the data center.

Neutron's main network abstractions are:

(i) network: a virtual layer 2 segment;
(ii) subnet: a layer 3 IP address space used in a network;
(iii) port: an attachment point to a network and to one or more subnets on that network;
(iv) router: a virtual appliance that performs routing between subnets and address translation;
(v) DHCP server: a virtual appliance in charge of IP address distribution;
(vi) security group: a set of filtering rules implementing a cloud-level firewall.

A cloud customer wishing to implement a virtual infrastructure in the cloud is considered an OpenStack tenant and can use the OpenStack dashboard to instantiate computing and networking resources, typically creating a new network and the necessary subnets, optionally spawning the related DHCP servers, then starting as many VM instances as required, based on a given set of available images, and specifying the subnet (or subnets) to which the VM is connected. Neutron takes care of creating a port on each specified subnet (and its underlying network) and of connecting the VM to that port, while the DHCP service on that network (resident in the network node) assigns a fixed IP address to it. Other virtual appliances (e.g., routers providing global connectivity) can be implemented directly in the cloud platform, by means of containers and network namespaces typically defined in the network node. The different tenant networks are isolated by means of VLANs and network namespaces, whereas the security groups protect the VMs from external attacks or unauthorized access. When some VM instances offer services that must be reachable by external users, the cloud provider defines a pool of floating IP addresses on the external network and configures the network node with VM-specific forwarding rules based on those floating addresses.
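As a rough sketch of this provisioning workflow, the same steps can be driven through the OpenStack API; the example below uses the openstacksdk Python client (one possible client, not the tool used in the paper), with cloud name, image, flavor, and public network IDs as placeholders:

```python
import openstack

# Credentials are read from clouds.yaml; "mycloud" is a placeholder.
conn = openstack.connect(cloud="mycloud")

# Create a tenant network and subnet; Neutron spawns the related
# DHCP server in the network node.
net = conn.network.create_network(name="InVMnet1")
conn.network.create_subnet(network_id=net.id, name="InVMnet1-sub",
                           ip_version=4, cidr="10.0.3.0/24",
                           enable_dhcp=True)

# Boot a VM attached to that network; Neutron creates the port and
# the DHCP service assigns the fixed IP address.
server = conn.compute.create_server(
    name="customer-vm",
    image_id="IMAGE_UUID", flavor_id="FLAVOR_UUID",   # placeholders
    networks=[{"uuid": net.id}])
server = conn.compute.wait_for_server(server)

# Make the VM externally reachable with a floating IP; the network
# node installs the corresponding forwarding (NAT) rules.
fip = conn.network.create_ip(floating_network_id="PUBLIC_NET_UUID")
conn.compute.add_floating_ip_to_server(server, fip.floating_ip_address)
```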

OpenStack implements the virtual network infrastructure (VNI) exploiting multiple virtual bridges connecting virtual and/or physical interfaces that may reside in different network namespaces. To better understand such a complex system, a graphical tool was developed to display all the network elements used by OpenStack [24]. Two examples, showing the internal state of a network node connected to three virtual subnets and a compute node running two VMs, are displayed in Figures 2 and 3, respectively.

Each node runs an OVS-based integration bridge named br-int and, connected to it, an additional OVS bridge for each data center physical network attached to the node. So the network node (Figure 2) includes br-tun for the instance/tunnel network and br-ex for the external network, whereas a compute node (Figure 3) includes br-tun only.

Layer 2 virtualization and multitenant isolation on the physical network can be implemented using either VLANs or layer 2-in-layer 3/4 tunneling solutions, such as Virtual eXtensible LAN (VXLAN) or Generic Routing Encapsulation (GRE), which allow extending the local virtual networks also to remote data centers [25]. The examples shown in Figures 2 and 3 refer to the case of tenant isolation implemented with GRE tunnels on the instance/tunnel network. Whatever virtualization technology is used in the physical network, its virtual networks must be mapped into the VLANs used internally by Neutron to achieve isolation. This is performed by taking advantage of the programmable features available in OVS, through the insertion of appropriate OpenFlow mapping rules in br-int and br-tun.

Virtual bridges are interconnected by means of either virtual Ethernet (veth) pairs or patch port pairs, consisting of two virtual interfaces that act as the endpoints of a pipe: anything entering one endpoint always comes out on the other side.
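The following sketch reproduces this plumbing by hand, in the spirit of what the Neutron OVS agent installs; the VLAN tag, tunnel key, remote address, and port names are illustrative values, not the actual rules programmed by Neutron:

```python
import subprocess

def sh(cmd):
    subprocess.run(cmd, shell=True, check=True)

# Patch port pair interconnecting br-int and br-tun; each endpoint
# declares the other as its peer.
sh("ovs-vsctl add-port br-int patch-tun "
   "-- set interface patch-tun type=patch options:peer=patch-int")
sh("ovs-vsctl add-port br-tun patch-int "
   "-- set interface patch-int type=patch options:peer=patch-tun")

# GRE tunnel port towards a peer node (remote address illustrative).
sh("ovs-vsctl add-port br-tun gre0 "
   "-- set interface gre0 type=gre options:remote_ip=192.0.2.2")

# Map the tenant's internal VLAN 1 to tunnel key 0x1 and back.
sh('ovs-ofctl add-flow br-tun '
   '"in_port=patch-int,dl_vlan=1,actions=strip_vlan,set_tunnel:0x1,output:gre0"')
sh('ovs-ofctl add-flow br-tun '
   '"in_port=gre0,tun_id=0x1,actions=mod_vlan_vid:1,output:patch-int"')
```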

From the networking point of view, the creation of a new VM instance involves the following steps:

(i) The OpenStack scheduler component running in the controller node chooses the compute node that will host the VM.
(ii) A tap interface is created for each VM network interface, to connect it to the Linux kernel.
(iii) A Linux Bridge dedicated to each VM network interface is created (in Figure 3 two of them are shown), and the corresponding tap interface is attached to it.
(iv) A veth pair connecting the new Linux Bridge to the integration bridge is created.

The veth pair clearly emulates the Ethernet cable that would connect the two bridges in real life. Nonetheless, why the new Linux Bridge is needed is not intuitive, as the VM's tap interface could be directly attached to br-int. In short, the reason is that the antispoofing rules currently implemented by Neutron adopt the native Linux kernel filtering functions (netfilter) applied to bridged tap interfaces, which work only under Linux Bridges. Therefore the Linux Bridge is required as an intermediate element to interconnect the VM to the integration bridge. The security rules are applied to the Linux Bridge on the tap interface that connects the kernel-level bridge to the virtual Ethernet port of the VM running in user-space.
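The four steps above can be retraced manually. The sketch below mirrors the naming convention visible in Figure 3 (tap/qbr/qvb/qvo plus a port ID suffix, which Neutron derives from the port UUID); it is an illustration of the resulting plumbing, not the code Neutron runs:

```python
import subprocess

def sh(cmd):
    subprocess.run(cmd, shell=True, check=True)

pid = "03b80be1-55"   # illustrative port ID suffix

# (ii) tap interface exposing the VM network interface to the kernel
sh(f"ip tuntap add dev tap{pid} mode tap")

# (iii) per-VM Linux Bridge with the tap interface attached; this is
# where the netfilter-based antispoofing rules will be applied
sh(f"ip link add qbr{pid} type bridge")
sh(f"ip link set tap{pid} master qbr{pid}")

# (iv) veth pair (qvb/qvo) emulating the cable between the Linux
# Bridge and the OVS integration bridge
sh(f"ip link add qvb{pid} type veth peer name qvo{pid}")
sh(f"ip link set qvb{pid} master qbr{pid}")
sh(f"ovs-vsctl add-port br-int qvo{pid}")

for dev in (f"tap{pid}", f"qbr{pid}", f"qvb{pid}", f"qvo{pid}"):
    sh(f"ip link set {dev} up")
```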

4. Experimental Setup

The previous section makes the complexity of the OpenStack virtual network infrastructure clear. To understand optimal design strategies in terms of network performance, it is of great importance to analyze it under critical traffic conditions and assess the maximum sustainable packet rate under different application scenarios. The goal is to isolate as much as possible the level of performance of the main OpenStack network components and determine where the bottlenecks are located, speculating on possible improvements. To this purpose, a test-bed including a controller node, one or two compute nodes (depending on the specific experiment), and a network node was deployed and used to obtain the results presented in the following. In the test-bed, each compute node runs KVM, the native Linux VM Hypervisor, and is equipped with 8 GB of RAM and a quad-core processor enabled to hyperthreading, resulting in 8 virtual CPUs.


[Figure 2: Network elements in an OpenStack network node connected to three virtual subnets. Three OVS bridges (red boxes) are interconnected by patch port pairs (orange boxes). br-ex is directly attached to the external network physical interface (eth0), whereas the GRE tunnel is established on the instance/tunnel network physical interface (eth1) to connect br-tun with its counterpart in the compute node. A number of br-int ports (light-green boxes) are connected to four virtual router interfaces and three DHCP servers. An additional physical interface (eth2) connects the network node to the management network.]

The test-bed was configured to implement three possible use cases:

(1) a typical single tenant cloud computing scenario;
(2) a multitenant NFV scenario with dedicated network functions;
(3) a multitenant NFV scenario with shared network functions.

For each use case, multiple experiments were executed, as reported in the following. In the various experiments, typically a traffic source sends packets at increasing rate to a destination that measures the received packet rate and throughput. To this purpose, the RUDE & CRUDE tool was used for both traffic generation and measurement [26].

In some cases the Iperf3 tool was also added to generate background traffic at a fixed data rate [27]. All physical interfaces involved in the experiments were Gigabit Ethernet network cards.
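As an example of how the fixed-rate background traffic could be scripted (a sketch with an illustrative sink address; the variable-rate measured flows were generated with RUDE & CRUDE, not iperf3):

```python
import subprocess

RECEIVER = "192.0.2.20"   # illustrative address of the traffic sink,
                          # where "iperf3 -s" runs as a server

# Send 1500-byte UDP datagrams at a fixed 100 Mbit/s for 60 seconds.
subprocess.run(
    ["iperf3", "-c", RECEIVER,
     "-u",            # UDP mode
     "-b", "100M",    # target bit rate
     "-l", "1500",    # datagram payload size in bytes
     "-t", "60"],     # test duration in seconds
    check=True)
```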

4.1. Single Tenant Cloud Computing Scenario. This is the typical configuration where a single tenant runs one or multiple VMs that exchange traffic with one another in the cloud or with an external host, as shown in Figure 4. This is a rather trivial case of limited general interest, but it is useful to assess some basic concepts and pave the way to the deeper analysis developed in the second part of this section. In the experiments reported, as mentioned above, the virtualization Hypervisor was always KVM. A scenario with OpenStack running the cloud environment and a scenario without OpenStack were considered, to assess some general comparison and allow a first isolation of the performance degradation due to the individual building blocks, in particular Linux Bridge and OVS.


[Figure 3: Network elements in an OpenStack compute node running two VMs. Two Linux Bridges (blue boxes) are attached to the VM tap interfaces (green boxes) and connected by virtual Ethernet pairs (light-blue boxes) to br-int.]

The experiments report the following cases:

(1) OpenStack scenario: it adopts the standard OpenStack cloud platform described in the previous section, with two VMs respectively acting as sender and receiver. In particular, the following setups were tested:

(1.1) a single compute node executing two colocated VMs;
(1.2) two distinct compute nodes, each executing a VM.

(2) Non-OpenStack scenario: it adopts physical hosts running Linux-Ubuntu server and KVM Hypervisor, using either OVS or Linux Bridge as a virtual switch. The following setups were tested:

(2.1) one physical host executing two colocated VMs, acting as sender and receiver and directly connected to the same Linux Bridge;
(2.2) the same setup as the previous one, but with an OVS bridge instead of a Linux Bridge;
(2.3) two physical hosts: one executing the sender VM connected to an internal OVS, and the other natively acting as the receiver.


[Figure 4: Reference logical architecture of a single tenant virtual infrastructure with 5 hosts: 4 hosts are implemented as VMs in the cloud and are interconnected via the OpenStack layer 2 virtual infrastructure; the 5th host is implemented by a physical machine placed outside the cloud but still connected to the same logical LAN.]

[Figure 5: Multitenant NFV scenario with dedicated network functions tested on the OpenStack platform.]

4.2. Multitenant NFV Scenario with Dedicated Network Functions. The multitenant scenario we want to analyze is inspired by a simple NFV case study, as illustrated in Figure 5: each tenant's service chain consists of a customer-controlled VM, followed by a dedicated deep packet inspection (DPI) virtual appliance and a conventional gateway (router) connecting the customer LAN to the public Internet. The DPI is deployed by the service operator as a separate VM with two network interfaces, running a traffic monitoring application based on the nDPI library [28]. It is assumed that the DPI analyzes the traffic profile of the customers (source and destination IP addresses and ports, application protocol, etc.) to guarantee the matching with the customer service level agreement (SLA), a practice that is rather common among Internet service providers to enforce network security and traffic policing. The virtualization approach, executing the DPI in a VM, makes it possible to easily configure and adapt the inspection function to the specific tenant characteristics. For this reason, every tenant has its own DPI with dedicated configuration. On the other hand, the gateway has to implement a standard functionality and is shared among customers. It is implemented as a virtual router for packet forwarding and NAT operations.

The implementation of the test scenarios has been done following the OpenStack architecture. The compute nodes of the cluster run the VMs, while the network node runs the virtual router within a dedicated network namespace. All layer 2 connections are implemented by a virtual switch (with proper VLAN isolation) distributed in both the compute and network nodes. Figure 6 shows the view provided by the OpenStack dashboard in the case of 4 tenants simultaneously active, which is the one considered for the numerical results presented in the following. The choice of 4 tenants was made to provide meaningful results with an acceptable degree of complexity, without lack of generality. As results show, this is enough to put the hardware resources of the compute node under stress and therefore evaluate performance limits and critical issues.

It is very important to outline that the VM setup shown in Figure 5 is not commonly seen in a traditional cloud computing environment. The VMs usually behave as single hosts connected as endpoints to one or more virtual networks, with one single network interface and no pass-through forwarding duties. In NFV, the virtual network functions (VNFs) often perform actions that require packet forwarding. Network Address Translators (NATs), Deep Packet Inspectors (DPIs), and so forth all belong to this category. If such VNFs are hosted in VMs, the result is that VMs in the OpenStack infrastructure must be allowed to perform packet forwarding, which goes against the typical rules implemented for security reasons in OpenStack. For instance, when a new VM is instantiated, it is attached to a Linux Bridge to which filtering rules are applied, with the goal of avoiding that the VM sends packets with MAC and IP addresses that are not the ones allocated to the VM itself. Clearly this is an antispoofing rule that makes perfect sense in a normal networking environment, but it impairs the forwarding of packets originated by another VM, as is the case of the NFV scenario. In the scenario considered here, it was therefore necessary to permanently modify the filtering rules in the Linux Bridges, by allowing, within each tenant slice, packets coming from or directed to the customer VM's IP address to pass through the Linux Bridges attached to the DPI virtual appliance. Similarly, the virtual router is usually connected just to one LAN, so its NAT function is configured for a single pool of addresses. This was also modified and adapted to serve the whole set of internal networks used in the multitenant setup.
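One possible shape of such an exception is sketched below, purely as an illustration: Neutron organizes its antispoofing in its own iptables chains, which are not reproduced here, and the subnet and DPI tap interface names are borrowed from the figures for readability.

```python
import subprocess

def sh(cmd):
    subprocess.run(cmd, shell=True, check=True)

CUSTOMER_NET = "10.0.3.0/24"                      # tenant 1 customer subnet
DPI_TAPS = ["tap77f8a413-49", "tapf0b115c8-48"]   # DPI VM interfaces

# Accept forwarded packets of the customer subnet through the Linux
# Bridges attached to the DPI, which the default antispoofing rules
# (accepting only the DPI's own addresses) would otherwise drop.
for tap in DPI_TAPS:
    sh(f"iptables -I FORWARD -m physdev --physdev-in {tap} "
       f"-s {CUSTOMER_NET} -j ACCEPT")
    sh(f"iptables -I FORWARD -m physdev --physdev-out {tap} "
       f"-d {CUSTOMER_NET} -j ACCEPT")
```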

4.3. Multitenant NFV Scenario with Shared Network Functions. We finally extend our analysis to a set of multitenant scenarios assuming different levels of shared VNFs, as illustrated in Figure 7. We start with a single VNF, that is, the virtual router connecting all tenants to the external network (Figure 7(a)). Then we progressively add a shared DPI (Figure 7(b)), a shared firewall/NAT function (Figure 7(c)), and a shared traffic shaper (Figure 7(d)). The rationale behind this last group of setups is to evaluate how NFV deployment on top of an OpenStack compute node performs under a realistic multitenant scenario, where traffic flows must be processed by a chain of multiple VNFs. The complexity of the virtual network path inside the compute node for the VNF chaining of Figure 7(d) is displayed in Figure 8. The peculiar nature of NFV traffic flows is clearly shown in the figure, where packets are being forwarded multiple times across br-int as they enter and exit the multiple VNFs running in the compute node.


[Figure 6: The OpenStack dashboard shows the tenants' virtual networks (slices). Each slice includes a VM connected to an internal network (InVMnet_i) and a second VM performing DPI and packet forwarding between InVMnet_i and DPInet_i; the slices use 10.0.3.0/24 through 10.0.6.0/24 subnets, plus 10.250.0.0/24 for the public network. Connectivity with the public Internet is provided for all by the virtual router in the bottom-left corner.]

5. Numerical Results

5.1. Benchmark Performance. Before presenting and discussing the performance of the study scenarios described above, it is important to set some benchmark as a reference for comparison. This was done by considering a back-to-back (B2B) connection between two physical hosts with the same hardware configuration used in the cluster of the cloud platform. The former host acts as traffic generator while the latter acts as traffic sink. The aim is to verify and assess the maximum throughput and sustainable packet rate of the hardware platform used for the experiments. Packet flows ranging from 10^3 to 10^5 packets per second (pps), for both 64- and 1500-byte IP packet sizes, were generated. For 1500-byte packets, the throughput saturates to about 970 Mbps at 80 Kpps. Given that the measurement does not consider the Ethernet overhead, this limit is clearly very close to 1 Gbps, which is the physical limit of the Ethernet interface. For 64-byte packets the results are different, since the maximum measured throughput is about 150 Mbps. Therefore the limiting factor is not the Ethernet bandwidth but the maximum sustainable packet processing rate of the compute node. These results are shown in Figure 9.
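These saturation values can be cross-checked with simple arithmetic on the Ethernet framing overhead; the following quick computation (ours, not part of the paper's toolchain) assumes 38 bytes of per-packet overhead on the wire (14 B header, 4 B FCS, 8 B preamble/SFD, 12 B inter-frame gap):

```python
def ip_throughput_mbps(pps, ip_bytes):
    """IP-level throughput as plotted in the figures (no Ethernet overhead)."""
    return pps * ip_bytes * 8 / 1e6

def gbe_line_rate_pps(ip_bytes, overhead=38):
    """Packet rate that saturates Gigabit Ethernet, wire overhead included."""
    return 1e9 / ((ip_bytes + overhead) * 8)

print(ip_throughput_mbps(80e3, 1500))  # 960.0 -> consistent with ~970 Mbps measured
print(gbe_line_rate_pps(1500))         # ~81.3 Kpps -> why saturation starts near 80 Kpps
print(gbe_line_rate_pps(64))           # ~1.2 Mpps -> far above what the host sustains,
                                       #   so processing, not bandwidth, limits 64-byte flows
```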


[Figure 7: Multitenant NFV scenario with shared network functions tested on the OpenStack platform: (a) single VNF; (b) two VNFs' chaining; (c) three VNFs' chaining; (d) four VNFs' chaining.]

This latter limitation, related to the processing capabilities of the hosts, is not very relevant to the scope of this work. Indeed, it is always possible in a real operation environment to deploy more powerful and better dimensioned hardware. This was not possible in this set of experiments, where the cloud cluster was an existing research infrastructure which could not be modified at will. Nonetheless, the objective here is to understand the limitations that emerge as a consequence of the networking architecture resulting from the deployment of the VNFs in the cloud, and not of the specific hardware configuration. For these reasons, as well as for the sake of brevity, the numerical results presented in the following mostly focus on the case of 1500-byte packet length, which stresses the network more than the hosts in terms of performance.

5.2. Single Tenant Cloud Computing Scenario. The first series of results is related to the single tenant scenario described in Section 4.1. Figure 10 shows the comparison of OpenStack setups (1.1) and (1.2) with the B2B case. The figure shows that the different networking configurations play a crucial role in performance. Setup (1.1), with the two VMs colocated in the same compute node, clearly is more demanding, since the compute node has to process the workload of all the components shown in Figure 3, that is, packet generation and reception in two VMs and layer 2 switching in two Linux Bridges and two OVS bridges (as a matter of fact, the packets are both outgoing and incoming at the same time within the same physical machine). The performance starts deviating from the B2B case at around 20 Kpps, with a saturating effect starting at 30 Kpps. This is the maximum packet processing capability of the compute node, regardless of the physical networking capacity, which is not fully exploited in this particular scenario where the traffic flow does not leave the physical host. Setup (1.2) splits the workload over two physical machines, and the benefit is evident. The performance is almost ideal, with a very little penalty due to the virtualization overhead.

These very simple experiments lead to an important conclusion that motivates the more complex experiments that follow: the standard OpenStack virtual network implementation can show significant performance limitations. For this reason, the first objective was to investigate where the possible bottleneck is, by evaluating the performance of the virtual network components in isolation. This cannot be done with OpenStack in action; therefore, ad hoc virtual networking scenarios were implemented, deploying just parts of the typical OpenStack infrastructure. These are called Non-OpenStack scenarios in the following.


[Figure 8: A view of the OpenStack compute node with the tenant VM and the VNFs installed, including the building blocks of the virtual network infrastructure. The red dashed line shows the path followed by the packets traversing the VNF chain displayed in Figure 7(d).]

[Figure 9: Throughput versus generated packet rate in the B2B setup for 64- and 1500-byte packets. Comparison with ideal 1500-byte packet throughput.]

[Figure 10: Received versus generated packet rate in the OpenStack scenario, setups (1.1) and (1.2), with 1500-byte packets.]

Setups (2.1) and (2.2) compare Linux Bridge, OVS, and B2B, as shown in Figure 11. The graphs show interesting and important results that can be summarized as follows:

(i) The introduction of some virtual network component (thus introducing the processing load of the physical hosts in the equation) is always a cause of performance degradation, but with very different degrees of magnitude depending on the virtual network component.

(ii) OVS introduces a rather limited performance degradation, at very high packet rate, with a loss of some percent.

(iii) Linux Bridge introduces a significant performance degradation, starting well before the OVS case and leading to a loss in throughput as high as 50%.

[Figure 11: Received versus generated packet rate in the Non-OpenStack scenario, setups (2.1) and (2.2), with 1500-byte packets.]

The conclusion of these experiments is that the presence of additional Linux Bridges in the compute nodes is one of the main reasons for the OpenStack performance degradation. Results obtained from testing setup (2.3) are displayed in Figure 12, confirming that with OVS it is possible to reach performance comparable with the baseline.

5.3. Multitenant NFV Scenario with Dedicated Network Functions. The second series of experiments was performed with reference to the multitenant NFV scenario with dedicated network functions described in Section 4.2. The case study considers that different numbers of tenants are hosted in the same compute node, sending data to a destination outside the LAN, therefore beyond the virtual gateway. Figure 13 shows the packet rate actually received at the destination for each tenant, for different numbers of simultaneously active tenants, with 1500-byte IP packet size. In all cases the tenants generate the same amount of traffic, resulting in as many overlapping curves as the number of active tenants. All curves grow linearly as long as the generated traffic is sustainable, and then they saturate. The saturation is caused by the physical bandwidth limit imposed by the Gigabit Ethernet interfaces involved in the data transfer. In fact, the curves become flat as soon as the packet rate reaches about 80 Kpps for 1 tenant, about 40 Kpps for 2 tenants, about 27 Kpps for 3 tenants, and about 20 Kpps for 4 tenants, that is, when the total packet rate is slightly more than 80 Kpps, corresponding to 1 Gbps.

[Figure 12: Received versus generated packet rate in the Non-OpenStack scenario, setup (2.3), with 1500-byte packets.]

[Figure 13: Received versus generated packet rate for each tenant (T1, T2, T3, and T4), for different numbers of active tenants, with 1500-byte IP packet size.]
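These per-tenant saturation points match an even split of the Gigabit Ethernet line rate among the active tenants, as the following quick check (ours, assuming 38 bytes of per-packet Ethernet overhead) shows:

```python
def per_tenant_saturation_kpps(n_tenants, ip_bytes=1500, overhead=38):
    """Fair share of the GbE line rate among n tenants, in Kpps."""
    total_pps = 1e9 / ((ip_bytes + overhead) * 8)
    return total_pps / n_tenants / 1e3

for n in (1, 2, 3, 4):
    print(n, round(per_tenant_saturation_kpps(n), 1))
# -> 81.3, 40.6, 27.1, 20.3 Kpps: in line with the ~80/40/27/20 Kpps
#    saturation points observed in Figure 13
```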

In this case it is worth investigating what happens for small packets, therefore putting more pressure on the processing capabilities of the compute node. Figure 14 reports the 64-byte packet size case. As discussed previously, in this case the performance saturation is not caused by the physical bandwidth limit but by the inability of the hardware platform to cope with the packet processing workload (in fact, the single compute node has to process the workload of all the components involved, including packet generation and DPI in the VMs of each tenant, as well as layer 2 packet processing and switching in three Linux Bridges per tenant and two OVS bridges). As could be easily expected from the results presented in Figure 9, the virtual network is not able to use the whole physical capacity. Even in the case of just one tenant, a total bit rate of about 77 Mbps, well below 1 Gbps, is measured. Moreover, this penalty increases with the number of tenants (i.e., with the complexity of the virtual system). With two tenants the curve saturates at a total of approximately 150 Kpps (75 × 2), with three tenants at a total of approximately 135 Kpps (45 × 3), and with four tenants at a total of approximately 120 Kpps (30 × 4). This is to say that an increase of one unit in the number of tenants results in a decrease of about 10% in the usable overall network capacity, and in a similar penalty per tenant.

[Figure 14: Received versus generated packet rate for each tenant (T1, T2, T3, and T4), for different numbers of active tenants, with 64-byte IP packet size.]

Given the results of the previous section, it is likely that the Linux Bridges are responsible for most of this performance degradation. In Figure 15 a comparison is presented between the total throughput obtained under normal OpenStack operations and the corresponding total throughput measured in a custom configuration where the Linux Bridges attached to each VM are bypassed. To implement the latter scenario, the OpenStack virtual network configuration running in the compute node was modified by connecting each VM's tap interface directly to the OVS integration bridge. The curves show that the presence of Linux Bridges in normal OpenStack mode is indeed causing performance degradation, especially when the workload is high (i.e., with 4 tenants). It is interesting to note also that the penalty related to the number of tenants is mitigated by the bypass, but not fully solved.
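A sketch of the bypass on one VM port is shown below (interface names follow Figure 3 and are illustrative; the paper does not list the exact commands used). Note that plugging the tap interface straight into br-int also disables the netfilter-based security group rules applied on the Linux Bridge:

```python
import subprocess

def sh(cmd):
    subprocess.run(cmd, shell=True, check=True)

pid = "03b80be1-55"   # illustrative port ID suffix

# Detach the VM tap interface from its dedicated Linux Bridge...
sh(f"ip link set tap{pid} nomaster")
# ...remove the now unused qbr/qvb/qvo plumbing...
sh(f"ovs-vsctl del-port br-int qvo{pid}")
sh(f"ip link del qvb{pid}")   # removes both ends of the veth pair
sh(f"ip link del qbr{pid}")
# ...and connect the tap interface directly to the integration bridge.
sh(f"ovs-vsctl add-port br-int tap{pid}")
```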

5.4. Multitenant NFV Scenario with Shared Network Functions. The third series of experiments was performed with reference to the multitenant NFV scenario with shared network functions described in Section 4.3. In each experiment, four tenants are equally generating increasing amounts of traffic, ranging from 1 to 100 Kpps. Figures 16 and 17 show the packet rate actually received at the destination from tenant T1, as a function of the packet rate generated by T1, for different levels of VNF chaining, with 1500- and 64-byte IP packet size, respectively. The measurements demonstrate that, for the 1500-byte case, adding a single shared VNF (even one that executes heavy packet processing, such as the DPI) does not significantly impact the forwarding performance of the OpenStack compute node for a packet rate below 50 Kpps (note that the physical capacity is saturated by the flows simultaneously generated from four tenants at around 20 Kpps, similarly to what happens in the dedicated VNF case of Figure 13). Then the throughput slowly degrades. In contrast, when 64-byte packets are generated, even a single VNF can cause heavy performance losses above 25 Kpps, when the packet rate reaches the sustainability limit of the forwarding capacity of our compute node. Independently of the packet size, adding another VNF with heavy packet processing (the firewall/NAT is configured with 40,000 matching rules) causes the performance to rapidly degrade. This is confirmed when a fourth VNF is added to the chain, although for the 1500-byte case the measured packet rate is the one that saturates the maximum bandwidth made available by the traffic shaper. Very similar performance, which we do not show here, was measured also for the other three tenants.

[Figure 15: Total throughput measured versus total packet rate generated by 2 to 4 tenants, for 64-byte packet size. Comparison between normal OpenStack mode and Linux Bridge bypass with 3 and 4 tenants.]

[Figure 16: Received versus generated packet rate for one tenant (T1) when four tenants are active, with 1500-byte IP packet size and different levels of VNF chaining as per Figure 7. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.]

[Figure 17: Received versus generated packet rate for one tenant (T1) when four tenants are active, with 64-byte IP packet size and different levels of VNF chaining as per Figure 7. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.]

To further investigate the effect of VNF chaining, we considered the case when traffic generated by tenant T1 is not subject to VNF chaining (as in Figure 7(a)), whereas flows originated from T2, T3, and T4 are processed by four VNFs (as in Figure 7(d)). The results presented in Figures 18 and 19 demonstrate that, owing to the traffic shaping function applied to the other tenants, the throughput of T1 can reach values not very far from the case when it is the only active tenant, especially for packet rates below 35 Kpps. Therefore a smart choice of the VNF chaining and a careful planning of the cloud platform resources could improve the performance of a given class of priority customers. In the same situation we measured the TCP throughput achievable by the four tenants. As shown in Figure 20, we can reach the same conclusions as in the UDP case.

[Figure 18: Received throughput versus generated packet rate for each tenant (T1, T2, T3, and T4), when T1 does not traverse the VNF chain of Figure 7(d), with 1500-byte IP packet size. Comparison with the single tenant case. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.]

[Figure 19: Received throughput versus generated packet rate for each tenant (T1, T2, T3, and T4), when T1 does not traverse the VNF chain of Figure 7(d), with 64-byte IP packet size. Comparison with the single tenant case. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.]

6. Conclusion

Network Function Virtualization will completely reshape the approach of telco operators to provide existing as well as novel network services, taking advantage of the increased flexibility and reduced deployment costs of the cloud computing paradigm. In this work, the problem of evaluating complexity and performance, in terms of sustainable packet rate, of virtual networking in cloud computing infrastructures dedicated to NFV deployment was addressed. An OpenStack-based cloud platform was considered and deeply analyzed to fully understand the architecture of its virtual network infrastructure. To this end, an ad hoc visual tool was also developed that graphically plots the different functional blocks (and related interconnections) put in place by Neutron, the OpenStack networking service. Some examples were provided in the paper.

14 Journal of Electrical and Computer Engineering

0

200

400

600

800

1000

TCP

thro

ughp

ut re

ceiv

ed (M

bps)

Time (s)

T1-VR-DEST single tenantT1-VR-DESTT2-DPI-FW-TS-VR-DESTT3-DPI-FW-TS-VR-DESTT4-DPI-FW-TS-VR-DEST

0 20 40 60 80 100 120

Figure 20 Received TCP throughput for each tenant (T1 T2 T3and T4) when T1 does not traverse the VNF chain of Figure 7(d)Comparison with the single tenant case DPI deep packet inspec-tion FW firewallNAT TS traffic shaper VR virtual router DESTdestination

flexibility and reduced deployment costs of the cloud computing paradigm. In this work, the problem of evaluating complexity and performance, in terms of sustainable packet rate, of virtual networking in cloud computing infrastructures dedicated to NFV deployment was addressed. An OpenStack-based cloud platform was considered and deeply analyzed to fully understand the architecture of its virtual network infrastructure. To this end, an ad hoc visual tool was also developed that graphically plots the different functional blocks (and related interconnections) put in place by Neutron, the OpenStack networking service. Some examples were provided in the paper.

The analysis brought the focus of the performance investigation on the two basic software switching elements natively adopted by OpenStack, namely, Linux Bridge and Open vSwitch. Their performance was first analyzed in a single tenant cloud computing scenario, by running experiments on a standard OpenStack setup as well as in ad hoc stand-alone configurations built with the specific purpose of observing them in isolation. The results prove that the Linux Bridge is the critical bottleneck of the architecture, while Open vSwitch shows an almost optimal behavior.

The analysis was then extended to more complex scenarios, assuming a data center hosting multiple tenants deploying NFV environments. The case studies considered first a simple dedicated deep packet inspection function, followed by conventional address translation and routing, and then a more realistic virtual network function chaining shared among a set of customers with increased levels of complexity. Results about sustainable packet rate and throughput performance of the virtual network infrastructure were presented and discussed.

The main outcome of this work is that an open-source cloud computing platform such as OpenStack can be effectively adopted to deploy NFV in network edge data centers replacing legacy telco central offices. However, this solution poses some limitations to the network performance, which are not simply related to the hosting hardware maximum capacity but also to the virtual network architecture implemented by OpenStack. Nevertheless, our study demonstrates that some of these limitations can be mitigated with a careful redesign of the virtual network infrastructure and an optimal planning of the virtual network functions. In any case, such limitations must be carefully taken into account for any engineering activity in the virtual networking arena.

Obviously, scaling up the system and distributing the virtual network functions among several compute nodes will definitely improve the overall performance. However, in this case the role of the physical network infrastructure becomes critical, and an accurate analysis is required in order to isolate the contributions of virtual and physical components. We plan to extend our study in this direction in our future work, after properly upgrading our experimental test-bed.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work was partially funded by EIT ICT Labs, Action Line on Future Networking Solutions, Activity no. 15270/2015 "SDN at the Edges". The authors would like to thank Mr. Giuliano Santandrea for his contributions to the experimental setup.


Research Article

Server Resource Dimensioning and Routing of Service Function Chain in NFV Network Architectures

V. Eramo,1 A. Tosti,2 and E. Miucci1

1DIET, "Sapienza" University of Rome, Via Eudossiana 18, 00184 Rome, Italy
2Telecom Italia, Via di Val Cannuta 250, 00166 Roma, Italy

Correspondence should be addressed to E. Miucci; 1kronos1@gmail.com

Received 29 September 2015; Accepted 18 February 2016

Academic Editor: Vinod Sharma

Copyright © 2016 V. Eramo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The Network Function Virtualization (NFV) technology aims at virtualizing the network service with the execution of the single service components in Virtual Machines activated on Commercial-off-the-shelf (COTS) servers. Any service is represented by the Service Function Chain (SFC), that is, a set of VNFs to be executed according to a given order. The running of VNFs needs the instantiation of VNF instances (VNFIs) that in general are software components executed on Virtual Machines. In this paper we cope with the routing and resource dimensioning problem in NFV architectures. We formulate the optimization problem and, due to its NP-hard complexity, heuristics are proposed for both cases of offline and online traffic demand. We show how the heuristics work correctly by guaranteeing a uniform occupancy of the server processing capacity and the network link bandwidth. A consolidation algorithm for the power consumption minimization is also proposed. The application of the consolidation algorithm allows for a high power consumption saving that, however, is to be paid with an increase in SFC blocking probability.

1 Introduction

Today's networks are overly complex, partly due to an increasing variety of proprietary fixed-function appliances that are unable to deliver the agility and economics needed to address constantly changing market requirements [1]. This is because network elements have traditionally been optimized for high packet throughput at the expense of flexibility, thus hampering the deployment of new services [2]. Network Function Virtualization (NFV) can provide the infrastructure flexibility and agility needed to successfully compete in today's evolving communications landscape [3]. NFV implements network functions in software running on a pool of shared commodity servers, instead of using dedicated proprietary hardware. This virtualized approach decouples the network hardware from the network function and results in increased infrastructure flexibility and reduced hardware costs. Because the infrastructure is simplified and streamlined, new and expanded services can be created quickly and with less expense. Implementations of the paradigm have also been proposed [4] and the performance has been investigated [5]. To support the NFV technology, both ETSI [6, 7] and

IETF [8, 9] are defining novel network architectures able to allocate resources for Virtualized Network Functions (VNFs) as well as manage and orchestrate NFV to support services. In particular, the service is represented by a Service Function Chain (SFC) [8], that is, a set of VNFs that have to be executed according to a given order. Any VNF is run on a VNF instance (VNFI), implemented with one Virtual Machine (VM) whose resources (Vcores, RAM memory, etc.) are allocated to it [10]. Some solutions have been proposed in the literature to solve the problem of choosing the servers where to instantiate VNFs and to determine the network paths interconnecting the VNFs [1]. A formulation of the optimization problem is illustrated in [11]. Three greedy algorithms and a tabu search-based heuristic are proposed in [12]. The extension of the problem to the case in which virtual routers are also considered is proposed in [13].
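As a minimal illustration (our own sketch, not taken from the paper), an SFC request of this kind can be represented as an ordered chain of VNF types between two access nodes; all names and values below are placeholders:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SFCRequest:
        """Illustrative container for an SFC request: an ordered chain of
        VNF types between an ingress and an egress access node."""
        ingress: str
        egress: str
        chain: tuple          # ordered VNF types, e.g. firewall -> VPN -> LB
        bandwidth_mbps: float

    req = SFCRequest("u", "t",
                     ("firewall", "vpn_encryption", "load_balancer"), 750.0)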

The innovative contribution of our paper is that, differently from [12], we follow an approach that allows for both the resource dimensioning and the SFC routing. We formulate the optimization problem whose objective is the minimization of the number of dropped SFC requests, with the constraint that the SFCs are routed so as to respect both



the server processing capacity and network link bandwidth. With the problem being NP-hard, we introduce a heuristic that allows for (i) the dimensioning of the Virtual Machines in terms of number of Vcores assigned and (ii) the SFC routing through the VNF instances implemented by the Virtual Machines. The heuristic performs simultaneously the dimensioning and routing operations. Furthermore, we propose a consolidation algorithm based on Virtual Machine migrations and able to achieve power consumption savings. The proposed algorithms are evaluated in scenarios characterized by offline and online traffic demands.

The paper is organized as follows. The related work is discussed in Section 2. Section 3 is devoted to illustrating the optimization problem and the proposed heuristic for the offline traffic demand case. In Section 4 we describe an SFC planner in which the proposed heuristic is applied in the case of online traffic demand. The planner also implements a consolidation algorithm whose application allows for power consumption savings. Some numerical results are shown in Section 5 to prove the effectiveness of the proposed heuristics. Finally, the main conclusions and future research items are mentioned in Section 6.

2 Related Work

The Internet Engineering Task Force (IETF) has formed the SFC Working Group [8, 9] to define Service Function Chaining related problems and to standardize the architecture and protocols. A Service Function Chain (SFC) is defined as a set of abstract service functions [15] and ordering constraints that must be applied to packets selected as a result of the classification. When virtual service functions are considered, the SFC is referred to as Virtual Network Function Forwarding Graph (VNFFG) within the ETSI [6]. To support the SFCs, Virtual Network Function instances (VNFIs) are activated and executed in COTS servers. To achieve the economics of scale expected from NFV, network link and server resources should be used efficiently. For this reason, efficient algorithms have to be introduced to determine where to instantiate the VNFIs and to route the SFCs, by choosing the network paths and the VNFIs involved. The algorithms have to take into account the limited resources of the network links and the servers, and pursue objectives of load balancing, energy saving, recovery from failure, and so on [1]. The task of placing SFCs is closely related to virtual network embedding [16] and virtual data network embedding [17], and may therefore be formulated as an optimization problem with a particular objective. This approach has been followed by [11, 18–21]. For instance, Moens and Turck [11] formulate the SFC placement problem as a linear optimization problem that has the objective of minimizing the number of active servers. Other objectives are pursued in [19] (latency, remaining data rate, number of used network nodes, etc.) and the SFC placing is formulated as a mixed integer quadratically constrained program.

It has been proved that the SFC placing problem is NP-hard. For this reason, efficient heuristics have been proposed and evaluated [10, 13, 22, 23]. For example, Xia et al. [22] formulate the placement and chaining problem as binary integer programming and propose a greedy heuristic in which the SFCs are first sorted according to their resource demands, and the SFCs with the highest resource demands are given priority for placement and routing.

In order to achieve energy consumption saving, the NFV architecture should allow for migrations of VNFIs, that is, the migration of the Virtual Machine implementing the VNFI. Though VNFI migrations allow for energy consumption saving, they may impact the QoS performance received by the users of the migrated VNFs. A model has been proposed in [24] to derive some performance indicators, such as the whole service downtime and the total migration time, so as to make function migration decisions.

The contribution of this paper is twofold: (i) to propose and to evaluate the performance of an algorithm that performs simultaneously resource dimensioning and SFC routing and (ii) to investigate the advantages, from the point of view of the power consumption saving, that the application of server consolidation techniques allows us to achieve.

3 Offline Algorithms for SFC Routing in NFV Network Architectures

We consider the case in which SFC requests are known in advance. We formulate the optimization problem whose objective is the minimization of the number of dropped SFC requests, with the constraint that the SFCs are routed so as to respect both the server processing capacity and network link bandwidth. With the problem being NP-hard, we introduce a heuristic that allows for (i) the dimensioning of the Virtual Machines in terms of the number of Vcores assigned and (ii) the SFC routing through the VNF instances implemented by the Virtual Machines. The heuristic performs simultaneously the dimensioning and routing operations.

The section is organized as follows. The network and traffic model is introduced in Section 3.1. Sections 3.2 and 3.3 are devoted to illustrating the optimization problem and the proposed heuristic, respectively.

3.1 Network and Traffic Model. Next we introduce the main terminology used to represent the physical network, the VNFs, and the SFC traffic requests [25]. We represent the physical network PN as a directed graph $G^{PN} = (V^{PN}, E^{PN})$, where $V^{PN}$ and $E^{PN}$ are the sets of physical nodes and links, respectively. The set $V^{PN}$ of nodes is given by the union of the three node sets $V^{PN}_A$, $V^{PN}_R$, and $V^{PN}_S$, which are the sets of access, switching, and server nodes, respectively. The server nodes and links are characterized by the following:

(i) $N^{PN}_{core}(w)$: processing capacity of the server node $w \in V^{PN}_S$, in terms of the number of cores available;

(ii) $C^{PN}(d)$: bandwidth of the physical link $d \in E^{PN}$.

We assume that $F$ types of VNFs can be provisioned, such as firewall, IDS, proxy, load balancer, and so on. We denote by $\mathcal{F} = \{f_1, f_2, \ldots, f_F\}$ the set of VNFs, with $f_i$ being the $i$th VNF type. The packet processing time of the VNF $f_i$ is denoted by $t^{proc}_i$ ($i = 1, \ldots, F$).

We also assume that the network operator is receiving $T$ Service Function Chain (SFC) requests, known in advance. The $i$th SFC request is characterized by the graph $G^{SFC}_i = (V^{SFC}_i, E^{SFC}_i)$, where $V^{SFC}_i$ represents the set of access and VNF nodes and $E^{SFC}_i$ ($i = 1, \ldots, T$) denotes the links between them. In particular, the set $V^{SFC}_i$ is given by the union of $V^{SFC}_{i,A}$ and $V^{SFC}_{i,F}$, denoting the sets of access nodes and VNFs, respectively. The graph is characterized by the following parameters:

(i) $\alpha_{vw}$: assuming the value 1 if the access node $v \in \bigcup_{i=1}^{T} V^{SFC}_{i,A}$ characterizes an SFC request starting/terminating from/to the physical access node $w \in V^{PN}_A$; otherwise its value is 0;

(ii) $\beta_{vk}$: assuming the value 1 if the VNF node $v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}$ needs the application of the VNF $f_k$ ($k \in [1, F]$); otherwise its value is 0;

(iii) $B^{SFC}(v)$: the processing capacity requested by the VNF node $v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}$; the parameter value depends on both the packet length and the bandwidth of the packet flow incoming to the VNF node;

(iv) $C^{SFC}(e)$: bandwidth requested by the link $e \in \bigcup_{i=1}^{T} E^{SFC}_i$.

An example of SFC request is represented in Figure 1, where the traffic emitted by the ingress access node $u$ has to be handled by the VNF nodes $v_1$ and $v_2$ and is terminated at the egress access node $t$. The access and VNF nodes are interconnected by the links $e_1$, $e_2$, and $e_3$. If the flow bandwidth is 1 Mbps and the packet length is 1500 bytes, we obtain the processing capacities $B^{SFC}(v_1) = B^{SFC}(v_2) = 83.33$, while the bandwidths $C^{SFC}(e_1)$, $C^{SFC}(e_2)$, and $C^{SFC}(e_3)$ of all links equal 1.

[Figure 1: An example of graph representing an SFC request ($u \to v_1 \to v_2 \to t$ through the links $e_1$, $e_2$, $e_3$) for a flow characterized by a bandwidth of 1 Mbps and a packet length of 1500 bytes, with $B^{SFC}(v_1) = B^{SFC}(v_2) = 83.33$ and $C^{SFC}(e_1) = C^{SFC}(e_2) = C^{SFC}(e_3) = 1$.]
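As a worked check of the example above, the requested processing capacity can be computed as the packet rate of the incoming flow, that is, the bandwidth divided by the packet length in bits; a short Python sketch (illustrative, not from the paper):

    def processing_capacity(bandwidth_bps, packet_length_bytes):
        # B^SFC(v): packet rate offered to the VNF node (packets per second),
        # assuming it equals bandwidth / packet size as in the Figure 1 example
        return bandwidth_bps / (8.0 * packet_length_bytes)

    print(processing_capacity(1e6, 1500))  # -> 83.33 packets/s, as in Figure 1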

3.2 Optimization Problem for the SFC Routing and VNF Instance Dimensioning. The objective of the SFC Routing and VNF Instance Dimensioning (SRVID) problem is to maximize the number of accepted SFC requests. The output of the problem is characterized by the following: (i) the servers in which the VNF nodes are executed and (ii) the network paths in which the virtual links of the SFCs are routed. This arrangement has to be accomplished without violating both the server processing and the physical link capacities. We assume that all of the servers create a VNF instance for each type of VNF, which will be shared by the SFCs using that server and requesting that type of VNF. For this reason each server will activate $F$ VNF instances, one for each type, and another output of the problem is the number of Vcores to be assigned to each VNF instance.

We assume that one virtual link of the SFC graphs can be routed through a single physical network path. We introduce the following notations:

(i) $\mathcal{P}$: set of paths in $G^{PN}$;

(ii) $\delta_{dp}$: binary function assuming the value 1 or 0 if the network link $d$ belongs or does not belong to the path $p \in \mathcal{P}$, respectively;

(iii) $a^{PN}(p)$ and $b^{PN}(p)$: origin and destination nodes of the path $p \in \mathcal{P}$;

(iv) $a^{SFC}(d)$ and $b^{SFC}(d)$: origin and destination nodes of the virtual link $d \in \bigcup_{i=1}^{T} E^{SFC}_i$.

Next we formulate the optimal SRVID problem, characterized by the following optimization variables:

(i) $x_h$: binary variable assuming the value 1 if the $h$th SFC request is accepted; otherwise its value is zero;

(ii) $y_{wk}$: integer variable characterizing the number of Vcores allocated to the VNF instance of type $k$ in the server $w \in V^{PN}_S$;

(iii) $z^k_{vw}$: binary variable assuming the value 1 if the VNF node $v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}$ is served by the VNF instance of type $k$ in the server $w \in V^{PN}_S$;

(iv) $u_{dp}$: binary variable assuming the value 1 if the virtual link $d \in \bigcup_{i=1}^{T} E^{SFC}_i$ is embedded in the physical network path $p \in \mathcal{P}$; otherwise its value is zero.

Next we report the constraints on the optimization variables:

$$\sum_{k=1}^{F} y_{wk} \le N^{PN}_{core}(w), \quad w \in V^{PN}_S, \tag{1}$$

$$\sum_{k=1}^{F} \sum_{w \in V^{PN}_S} z^k_{vw} \le 1, \quad v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}, \tag{2}$$

$$z^k_{vw} \le \beta_{vk}, \quad k \in [1, F],\; v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F},\; w \in V^{PN}_S, \tag{3}$$

$$\sum_{v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}} z^k_{vw}\, B^{SFC}(v)\, t^{proc}_k \le y_{wk}, \quad k \in [1, F],\; w \in V^{PN}_S, \tag{4}$$

$$u_{dp} \le z^k_{a^{SFC}(d),\, a^{PN}(p)}, \quad a^{SFC}(d) \in \bigcup_{i=1}^{T} V^{SFC}_{i,F},\; k \in [1, F],\; p \in \mathcal{P}, \tag{5}$$

$$u_{dp} \le z^k_{b^{SFC}(d),\, b^{PN}(p)}, \quad b^{SFC}(d) \in \bigcup_{i=1}^{T} V^{SFC}_{i,F},\; k \in [1, F],\; p \in \mathcal{P}, \tag{6}$$

$$u_{dp} \le \alpha_{a^{SFC}(d),\, a^{PN}(p)}, \quad a^{SFC}(d) \in \bigcup_{i=1}^{T} V^{SFC}_{i,A},\; p \in \mathcal{P}, \tag{7}$$

$$u_{dp} \le \alpha_{b^{SFC}(d),\, b^{PN}(p)}, \quad b^{SFC}(d) \in \bigcup_{i=1}^{T} V^{SFC}_{i,A},\; p \in \mathcal{P}, \tag{8}$$

$$\sum_{p \in \mathcal{P}} u_{dp} \le 1, \quad d \in \bigcup_{i=1}^{T} E^{SFC}_i, \tag{9}$$

$$\sum_{d \in \bigcup_{i=1}^{T} E^{SFC}_i} C^{SFC}(d) \sum_{p \in \mathcal{P}} \delta_{ep}\, u_{dp} \le C^{PN}(e), \quad e \in E^{PN}, \tag{10}$$

$$x_h \le \sum_{k=1}^{F} \sum_{w \in V^{PN}_S} z^k_{vw}, \quad h \in [1, T],\; v \in V^{SFC}_{h,F}, \tag{11}$$

$$x_h \le \sum_{p \in \mathcal{P}} u_{dp}, \quad h \in [1, T],\; d \in E^{SFC}_h. \tag{12}$$

Constraint (1) establishes the fact that at most a number of Vcores equal to the available ones is used for each server node $w \in V^{PN}_S$. Constraint (2) expresses the condition that a VNF node can be served by only one VNF instance. To guarantee that a node $v \in V^{SFC}_{h,F}$ needing the application of a VNF type is mapped to a correct VNF instance, we introduce constraint (3). Constraint (4) limits the number of VNF nodes assigned to one VNF instance by taking into account both the number of Vcores assigned to the VNF instance and the required VNF node processing capacities. Constraints (5)–(8) establish the fact that when the virtual link $d$ is supported by the physical network path $p$, then $a^{PN}(p)$ and $b^{PN}(p)$ must be the physical network nodes that the nodes $a^{SFC}(d)$ and $b^{SFC}(d)$ of the virtual graph are assigned to. The choice of mapping of any virtual link on a single physical network path is represented by constraint (9). Constraint (10) avoids any overloading of any physical network link. Finally, constraints (11) and (12) establish the fact that an SFC request can be accepted when the nodes and the links of the SFC graph are assigned to VNF instances and physical network paths.

The problem objective is to maximize the number of SFCs accepted, given by

$$\max \sum_{h=1}^{T} x_h. \tag{13}$$

The introduced optimization problem is intractable, because it requires solving an NP-hard bin packing problem [26]. Due to its high complexity, it is not possible to solve the problem directly in a timely manner, given the large number of servers and network nodes. For this reason we propose an efficient heuristic in Section 3.3.
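To make the formulation concrete, the following sketch shows how the variables $x_h$, $y_{wk}$, and $z^k_{vw}$ and constraints (1), (2), (4), and (11) could be encoded with the PuLP modeling library in Python. The sets and numbers are toy placeholders, constraints (3) and (5)–(10), (12) are omitted for brevity, and this is our illustration, not the authors' implementation:

    from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, LpInteger

    servers = ["w1", "w2"]                 # V^PN_S (placeholder)
    vnf_types = [0, 1]                     # k in [1, F]
    n_core = {"w1": 48, "w2": 48}          # N^PN_core(w)
    t_proc = {0: 7.08e-6, 1: 1.64e-6}      # t^proc_k, seconds per packet
    # One VNF node per SFC request h, as (VNF type k, required rate B^SFC(v)):
    sfc_nodes = {0: (0, 83.33), 1: (1, 166.66), 2: (0, 83.33)}

    prob = LpProblem("SRVID", LpMaximize)
    x = {h: LpVariable(f"x_{h}", cat=LpBinary) for h in sfc_nodes}
    y = {(w, k): LpVariable(f"y_{w}_{k}", lowBound=0, cat=LpInteger)
         for w in servers for k in vnf_types}
    z = {(h, w): LpVariable(f"z_{h}_{w}", cat=LpBinary)
         for h in sfc_nodes for w in servers}

    for w in servers:                      # constraint (1): Vcore budget
        prob += lpSum(y[(w, k)] for k in vnf_types) <= n_core[w]
    for h in sfc_nodes:                    # constraints (2) and (11)
        prob += lpSum(z[(h, w)] for w in servers) <= 1
        prob += x[h] <= lpSum(z[(h, w)] for w in servers)
    for w in servers:                      # constraint (4): cover the load
        for k in vnf_types:
            prob += lpSum(z[(h, w)] * b * t_proc[k]
                          for h, (kk, b) in sfc_nodes.items() if kk == k) <= y[(w, k)]

    prob += lpSum(x.values())              # objective (13)
    prob.solve()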

3.3 Maximizing the Accepted SFC Requests Number (MASRN) Heuristic. The Maximizing the Accepted SFC Requests Number (MASRN) heuristic is based on the embedding algorithm proposed in [14] and has the objective of maximizing the number of accepted SFC requests. The MASRN algorithm tries routing the SFCs one by one, using the least loaded link and server resources so as to balance their use. Algorithm 1 illustrates the operation mode of the MASRN algorithm. The routing of a target SFC, the $k_{tar}$th, is illustrated. The main inputs of the algorithm are: (i) the set $T_{pre}$ containing the indexes of the SFC requests routed successfully before the $k_{tar}$th SFC request; (ii) the $k_{tar}$th SFC request graph $G^{SFC}_{k_{tar}} = (V^{SFC}_{k_{tar}}, E^{SFC}_{k_{tar}})$; (iii) the values of the parameters $\alpha_{vw}$ for the $k_{tar}$th SFC request; (iv) the values of the variables $z^k_{vw}$ and $u_{dp}$ for the SFC requests successfully routed before the $k_{tar}$th SFC request, as defined in Section 3.2.

The operation mode of the heuristic is based on the following variables:

(i) $A^k(w)$: the load of the cores allocated for the VNF instance of type $k \in [1, F]$ in the server $w \in V^{PN}_S$; the value of $A^k(w)$ is initialized by taking into account the core resource amount used by the SFCs successfully routed before the $k_{tar}$th SFC request; hence its initial value is
$$A^k(w) = \sum_{v \in \bigcup_{i \in T_{pre}} V^{SFC}_{i,F}} z^k_{vw}\, B^{SFC}(v)\, t^{proc}_k. \tag{14}$$

(ii) $S_N(w)$: defined as the stress of the node $w \in V^{PN}_S$ [14] and characterizing the server load; its value is initialized to
$$S_N(w) = \sum_{k=1}^{F} A^k(w). \tag{15}$$

(iii) $S_L(e)$: defined as the stress of the physical network link $e \in E^{PN}$ [14] and characterizing the link load; its value is initialized to
$$S_L(e) = \sum_{d \in \bigcup_{i \in T_{pre}} E^{SFC}_i} C^{SFC}(d) \sum_{p \in \mathcal{P}} \delta_{ep}\, u_{dp}. \tag{16}$$


(1) Input: physical network graph $G^{PN} = (V^{PN}, E^{PN})$; the $k_{tar}$th SFC request with its graph $G^{SFC}_{k_{tar}} = (V^{SFC}_{k_{tar}}, E^{SFC}_{k_{tar}})$; $T_{pre}$; $\alpha_{vw}$, $v \in V^{SFC}_{k_{tar},A}$, $w \in V^{PN}_A$; $z^k_{vw}$, $k \in [1,F]$, $v \in \bigcup_{i \in T_{pre}} V^{SFC}_{i,F}$, $w \in V^{PN}_S$; $u_{dp}$, $d \in \bigcup_{i \in T_{pre}} E^{SFC}_i$, $p \in \mathcal{P}$
(2) Variables: $A^k(w)$, $k \in [1,F]$, $w \in V^{PN}_S$; $S_N(w)$, $w \in V^{PN}_S$; $S_L(e)$, $e \in E^{PN}$; $y_{wk}$, $k \in [1,F]$, $w \in V^{PN}_S$
/* Evaluation of the candidate mapping of $G^{SFC}_{k_{tar}} = (V^{SFC}_{k_{tar}}, E^{SFC}_{k_{tar}})$ in $G^{PN} = (V^{PN}, E^{PN})$ */
(3) assign the nodes $v \in V^{SFC}_{k_{tar},A}$ to the physical network nodes $w \in V^{PN}_A$ according to the value of the parameter $\alpha_{vw}$
(4) select the nodes $w \in V^{PN}_S$ and the physical network paths $p \in \mathcal{P}$ to be assigned to the VNF nodes $v \in V^{SFC}_{k_{tar},F}$ and virtual links $d \in E^{SFC}_{k_{tar}}$ by applying the algorithm proposed in [14], based on the values of the node and link stresses $S_N(w)$ and $S_L(e)$; determine the variable values $z^k_{vw}$, $k \in [1,F]$, $v \in V^{SFC}_{k_{tar},F}$, $w \in V^{PN}_S$, and $u_{dp}$, $d \in E^{SFC}_{k_{tar}}$, $p \in \mathcal{P}$
/* Server Resource Availability Check Phase */
(5) for $v \in V^{SFC}_{k_{tar},F}$, $w \in V^{PN}_S$, $k \in [1,F]$ do
(6)   if $\sum_{s=1, s \ne k}^{F} y_{ws} + \lceil A^k(w) + z^k_{vw} B^{SFC}(v) t^{proc}_k \rceil \le N^{PN}_{core}(w)$ then
(7)     $A^k(w) = A^k(w) + z^k_{vw} B^{SFC}(v) t^{proc}_k$
(8)     $S_N(w) = S_N(w) + z^k_{vw} B^{SFC}(v) t^{proc}_k$
(9)     $y_{wk} = \lceil A^k(w) + z^k_{vw} B^{SFC}(v) t^{proc}_k \rceil$
(10)  else
(11)    REJECT the $k_{tar}$th SFC request
(12)  end if
(13) end for
/* Link Resource Availability Check Phase */
(14) for $e \in E^{PN}$ do
(15)  if $S_L(e) + \sum_{d \in E^{SFC}_{k_{tar}}} C^{SFC}(d) \sum_{p \in \mathcal{P}} \delta_{ep} u_{dp} \le C^{PN}(e)$ then
(16)    $S_L(e) = S_L(e) + \sum_{d \in E^{SFC}_{k_{tar}}} C^{SFC}(d) \sum_{p \in \mathcal{P}} \delta_{ep} u_{dp}$
(17)  else
(18)    REJECT the $k_{tar}$th SFC request
(19)  end if
(20) end for
(21) Output: $z^k_{vw}$, $k \in [1,F]$, $v \in V^{SFC}_{k_{tar},F}$, $w \in V^{PN}_S$; $u_{dp}$, $d \in E^{SFC}_{k_{tar}}$, $p \in \mathcal{P}$

Algorithm 1: MASRN algorithm.

The candidate physical network links and nodes in which to embed the target SFC are evaluated. First of all, the access nodes are evaluated (line 3) according to the values of the parameters $\alpha_{vw}$, $v \in V^{SFC}_{k_{tar},A}$, $w \in V^{PN}_A$. Next the MASRN algorithm selects a cluster of server nodes (line 4) that are not lightly loaded but are also likely to result in low substrate link stresses when they are connected. Details on the procedure are reported in [14].

In the next phases the resource availability in the selected cluster is checked. The resource availability in the server nodes and physical network links is verified in the Server (lines 5–13) and Link (lines 14–20) Resource Availability Check Phases, respectively. If resources are not available in either links or nodes, the target SFC request is rejected. In particular, in the Server Resource Availability Check Phase the algorithm verifies whether the allocation of new Vcores is needed, and in this case their availability is checked (line 6). The variable $y_{wk}$ is also updated in this phase (line 9).

Finally, the outputs (line 21) of the MASRN algorithm are the selected values for the variables $z^k_{vw}$, $k \in [1, F]$, $v \in V^{SFC}_{k_{tar},F}$, $w \in V^{PN}_S$, and $u_{dp}$, $d \in E^{SFC}_{k_{tar}}$, $p \in \mathcal{P}$.
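As an illustration of the Server Resource Availability Check Phase, the following Python sketch mirrors lines (5)–(13) of Algorithm 1; the data structures (dictionaries keyed by server and VNF type) are our own illustrative choice, not the authors' code:

    import math

    def server_resource_check(assignments, A, S_N, y, n_core, t_proc, F):
        """Sketch of Algorithm 1, lines (5)-(13).
        assignments: iterable of (v, w, k, B) tuples with z^k_{vw} = 1
        for the target SFC, where B = B^SFC(v)."""
        for (v, w, k, B) in assignments:
            others = sum(y.get((w, s), 0) for s in range(F) if s != k)
            new_cores = math.ceil(A.get((w, k), 0.0) + B * t_proc[k])
            if others + new_cores > n_core[w]:
                return False                  # REJECT the target SFC request
            A[(w, k)] = A.get((w, k), 0.0) + B * t_proc[k]   # line (7)
            S_N[w] = S_N.get(w, 0.0) + B * t_proc[k]         # line (8)
            y[(w, k)] = new_cores                            # line (9)
        return True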

The computational complexity of the MASRN algorithm depends on the procedure for the evaluation of the potential nodes [14]. Let $N_s$ be the total number of servers. The complexity of the phase in which the potential nodes are evaluated can be characterized according to the following remarks: (i) as many servers as the number of VNFs of the SFC are determined, and (ii) the $h$th server is selected by evaluating the $(N_s - h)$ shortest paths and by performing a minimum operation on a list with $(N_s - h)$ elements. According to these remarks, the computational complexity is given by $O(V(V + N_s)N_l \log(N_s + N_n))$, where $V$ is the maximum number of VNFs in an SFC and $N_l$ and $N_n$ are the numbers of nodes and links of the network.

[Figure 2: SFC planner architecture. The Orchestrator includes the SFC Scheduler, the Resource Monitor, and the Consolidation Module, which exchange SFC requests, scheduling decisions, migration decisions, and resource information.]

4 Online Algorithms for SFC Routing in NFV Network Architectures

In the online approach the SFC requests arrive at the system over time, and each of them is characterized by a certain duration. In order to reduce the complexity of the online SFC resource allocation algorithm, we designed an SFC planner whose general architecture is described in Section 4.1. The operation mode of the SFC planner is based on some algorithms that we describe in Section 4.2.

4.1 SFC Planner Architecture. The architecture of the SFC planner is shown in Figure 2. It could make up the main component of an orchestrator in the NFV architecture [6]. It is composed of three components: the SFC Scheduler, the Resource Monitor, and the Consolidation Module.

Once the SFC Scheduler receives an SFC request, it executes an algorithm to verify if resources are available and to decide which resources to use.

The physical resources, as well as the Virtual Machines running on the servers, are monitored by the Resource Monitor. If a failure of any physical/virtual node or link occurs, it notifies the event to the SFC Scheduler.

The Consolidation Module allows for server resource consolidation in low traffic periods. When the consolidation technique is applied, VMs are migrated to as few servers as possible; the remaining ones are turned off, and power consumption savings can be achieved.

4.2 SFC Scheduling and Consolidation Algorithms. In this section we describe the algorithms executed by the SFC Scheduler and the Consolidation Module.

Whenever a new SFC request arrives at the SFC planner, the SFC scheduling algorithm is executed in order to find the best embedding in the system while maintaining a uniform usage of the network resources. The main steps of this procedure are reported in the flow chart of Figure 3 and are based on the MASRN heuristic described in Section 3.3. The algorithm starts by selecting the set of servers on which the VNFs requested by the SFC will be executed. In the second step, the physical paths on which the virtual links will be mapped are selected. If there are not enough available resources in the first or in the second step, the request is dropped.

[Figure 3: Flow chart of the SFC scheduling algorithm. Servers are selected using the potential factor [26]; if enough resources are available, path selection follows; if either step lacks resources, the SFC request is dropped, otherwise it is accepted.]

The flow chart of the consolidation algorithm is reported in Figure 4. Each server $s \in V^{PN}_S$ is characterized by a parameter $\eta_s$, defined as the ratio of the total amount of incoming/outgoing traffic $\Lambda_s$ that it is handling to the power $P_s$ it is consuming, that is, $\eta_s = \Lambda_s / P_s$. The power consumption model assumes a constant power contribution and a variable one that linearly increases with the handled traffic. The server power consumption is expressed by

$$P_s = P_{idle} + (P_{max} - P_{idle}) \frac{L_s}{C_s}, \quad \forall s \in V^{PN}_S, \tag{17}$$

where $P_{idle}$ is the power consumed by the server when no traffic is handled, $P_{max}$ is the maximum server power consumption, $L_s$ is the total load handled by the server $s$, and $C_s$ is its maximum capacity. The maximum power that the server can consume is expressed as $P_{max} = P_{idle}/a$, where $a$ is a parameter characterizing how much the power consumption depends on the handled traffic. The power consumption is the more rate adaptive, the lower $a$ is. For $a = 1$ the rate adaptive power consumption is absent and the consumed power is constant.
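A small Python sketch of this power model and of the efficiency parameter $\eta_s$ (our illustration; parameter names are ours):

    def server_power(load_bps, c_max_bps, p_idle_watt, a):
        """Power model of (17), with P_max = P_idle / a; a = 1 gives a
        constant (non rate adaptive) consumption, smaller a gives more
        rate adaptivity."""
        p_max = p_idle_watt / a
        return p_idle_watt + (p_max - p_idle_watt) * (load_bps / c_max_bps)

    def efficiency(traffic_bps, power_watt):
        """eta_s = Lambda_s / P_s: traffic carried per watt; servers with
        the lowest eta_s are the first candidates to be switched off."""
        return traffic_bps / power_watt

    # Example with the Section 5 values: P_max = 1000 W, C_s = 10 Gbps, a = 0.1
    p = server_power(5e9, 10e9, p_idle_watt=100.0, a=0.1)   # -> 550.0 W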

[Figure 4: Flow chart of the SFC consolidation algorithm. Select the server with minimum $\eta_s$; build the set of possible destinations and sort it by descending $\eta_s$; if enough resources are available on the first server of the set, perform the migration, otherwise remove that server from the set and try the next one; if the set becomes empty, select another server to turn off.]

The proposed algorithm tries switching off the server with the minimum power consumption per bit carried. For this reason, when the consolidation algorithm is executed, it starts by selecting the server $s_{min}$ with the minimum value of $\eta_s$ as

the one to turn off. A set of possible destination servers able to host the VMs executed on $s_{min}$ is then evaluated and sorted in descending order, based again on the value of $\eta_s$. The algorithm proceeds by selecting the first element of the ordered set and evaluating whether there are enough resources in the site to reroute all the flows generated from or terminated at $s_{min}$. If it succeeds, the migration is performed and the server $s_{min}$ is turned off. If it fails, the destination server is removed from the set and the next server of the list is considered. When the ordered set of possible destinations becomes empty, another server to turn off is selected.

The SFC Consolidation Module also manages the resource deconsolidation procedure when the reactivation of physical servers/links is needed. Basically, the deconsolidation algorithm selects the most loaded VMs already migrated to a new server and moves them back to their original servers. The flows handled by these VMs are accordingly rerouted.
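A possible rendering of one consolidation pass is sketched below; `can_host_all` and `migrate_all` are hypothetical helpers standing for the resource check and the migration/rerouting step, which the paper delegates to the SFC planner:

    def consolidation_step(servers, eta, can_host_all, migrate_all):
        """One pass of the consolidation algorithm of Figure 4 (sketch)."""
        s_min = min(servers, key=lambda s: eta[s])     # lowest traffic-per-watt
        candidates = sorted((s for s in servers if s != s_min),
                            key=lambda s: eta[s], reverse=True)
        for dst in candidates:
            if can_host_all(s_min, dst):               # enough server/link resources?
                migrate_all(s_min, dst)                # move VMs, reroute flows
                return s_min                           # s_min can now be turned off
        return None                                    # no feasible destination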

5 Numerical Results

We evaluate the effectiveness of the proposed algorithms in the case of the network scenario reported in Figure 5. The network is composed of five core nodes, five edge nodes, and six access nodes, in which the SFC requests are randomly generated and terminated. A Network Function Virtualization (NFV) site is connected to edge nodes 3 and 4 through the two access routers 1 and 2. The NFV site is composed of two switches and sixteen servers. In the basic configuration, 40 Gbps links are considered, except the links connecting the servers to the switches in the NFV site, whose rate is equal to 10 Gbps. The sixteen servers are equipped with 48 Vcores each.

Each SFC request is composed of three Virtual Network Functions (VNFs), that is, a firewall, a load balancer, and VPN encryption, whose packet processing times are assumed to be equal to 7.08 µs, 0.65 µs, and 1.64 µs [27], respectively. We also assume that the load balancer splits the input traffic towards a number of output links chosen uniformly from 1 to 3.

An example of graph for a generated SFC is reported in Figure 6, in which $u_t$ is an ingress access node and $v^1_t$, $v^2_t$, and $v^3_t$ are egress access nodes. The first and second VNFs are a firewall and VPN encryption; the third VNF is a load balancer splitting the traffic towards three output links. In the considered case study we also assume that the SFC handles traffic flows of packet length equal to 1500 bytes and required bandwidth uniformly distributed from 500 Mbps to 1 Gbps.

5.1 Offline SFC Scheduling. In the offline case we assume that a given number $T$ of SFC requests are known a priori. For the case study previously described, we apply the MASRN heuristic proposed in Section 3.3. We report the percentage of dropped SFCs in Figure 7 as a function of the offered number of SFCs. We report the curve for the basic scenario and the ones in which the link capacities are doubled, quadrupled, and increased by ten times, respectively. From Figure 7 we can see how, in order to have a dropped SFC percentage lower than 10%, the number of SFC requests has to be smaller than 60 in the basic scenario. This remarkable blocking is due to the shortage of network capacity. In fact, we can notice from Figure 7 how the dropped SFC percentage significantly decreases when the link bandwidth increases. For instance, when the capacity is quadrupled, the network infrastructure is able to accommodate up to 180 SFCs without any loss.

This is confirmed in Figure 8, where we report the number of Vcores occupied in the servers in the case of $T = 360$. Each of the servers is represented on the $x$-axis with the identification (ID) of Figure 5. We also show in Figure 8 the number of Vcores allocated to each firewall, load balancer, and VPN encryption VNF with blue, green, and red colors, respectively. The basic scenario case and the ones in which the link capacity is doubled, quadrupled, and increased by 10 times are reported in Figures 8(a), 8(b), 8(c), and 8(d), respectively. From Figure 8 we can notice how the MASRN heuristic works correctly by guaranteeing a uniform occupancy of the servers in terms of the total number of Vcores used. We can also notice how the firewall VNF is the one needing the highest number of Vcores allocated. That is a consequence of the higher processing time that the firewall VNF requires with respect to the load balancer and VPN encryption VNFs.

[Figure 5: The network topology is composed of six access nodes, five edge nodes, and five core nodes. The NFV site is composed of two routers, two switches, and sixteen servers.]

5.2 Online SFC Scheduling. The numerical results are achieved in the following traffic scenario. The traffic pattern we consider is supposed to be sinusoidal, with a peak value during daytime and a minimum value during nighttime. We also assume that SFC requests follow a Poisson process and that the SFC duration time is distributed according to an


exponential distribution. The following parameters define the SFC requests in the Peak Hour Interval (PHI):

(i) $\lambda$: the arrival rate of SFC requests;

(ii) $\mu$: the termination rate of an embedded SFC;

(iii) $[\beta^{min}, \beta^{max}]$: the range in which the bandwidth request for an SFC is randomly selected.

We consider $K$ time intervals in a day, and each of them is characterized by a scaling factor $\alpha_h$, $h = 0, \ldots, K-1$. In order to capture the sinusoidal shape of the traffic in a day, $\alpha_h$ is evaluated with the following expression:

$$\alpha_h = \begin{cases} 1 & h = 0, \\ 1 - \dfrac{2h}{K}\,(1 - \alpha_{min}) & h = 1, \ldots, \dfrac{K}{2}, \\ 1 - \dfrac{2(K-h)}{K}\,(1 - \alpha_{min}) & h = \dfrac{K}{2} + 1, \ldots, K - 1, \end{cases} \tag{18}$$

where $\alpha_0$ and $\alpha_{min}$ refer to the peak traffic and the minimum traffic scenarios, respectively.

[Figure 6: An example of SFC composed of one firewall VNF, one VPN encryption VNF, and one load balancer VNF that splits the input traffic towards three output logical links; $u_t$ denotes the ingress access node, while $v^1_t$, $v^2_t$, and $v^3_t$ denote the egress access nodes.]

[Figure 7: Dropped SFC percentage as a function of the number of offered SFC requests. The four curves refer to the basic scenario and to the cases in which the link capacities are doubled, quadrupled, and increased by 10 times.]

The parameter $\alpha_h$ affects the SFC arrival rate and the bandwidth requested. In particular, we have the following

expressions for the SFC request rate $\lambda_h$ and the minimum and maximum requested bandwidths $\beta^{min}_h$ and $\beta^{max}_h$, respectively, in the interval $h$ ($h = 0, \ldots, K-1$):

$$\lambda_h = \alpha_h \lambda, \qquad \beta^{min}_h = \alpha_h \beta^{min}, \qquad \beta^{max}_h = \alpha_h \beta^{max}. \tag{19}$$
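A compact Python rendering of the daily profile of (18) and the per-interval parameters of (19); the piecewise expression follows our reconstruction above, and the numeric values are illustrative:

    def alpha(h, K, alpha_min):
        """Scaling factor of (18): piecewise-linear day profile with
        peak value 1.0 at h = 0 and minimum alpha_min at h = K/2."""
        if h == 0:
            return 1.0
        if h <= K // 2:
            return 1.0 - (2.0 * h / K) * (1.0 - alpha_min)
        return 1.0 - (2.0 * (K - h) / K) * (1.0 - alpha_min)

    # Per-interval arrival rate and bandwidth bounds, as in (19)
    K, lam, b_min, b_max = 24, 1.0, 500e6, 1e9
    profile = [(alpha(h, K, 0.2) * lam,
                alpha(h, K, 0.2) * b_min,
                alpha(h, K, 0.2) * b_max) for h in range(K)]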

Next we evaluate the performance of the SFC planner proposed in Section 4 in the dynamic traffic scenario, for the parameter values $\beta^{min} = 500$ Mbps, $\beta^{max} = 1$ Gbps, $\lambda = 1$ request/min, and $\mu = 1/15$ min$^{-1}$. We also assume $K = 24$, with a traffic change, and consequently an application of the consolidation algorithm, every hour.

Finally, we consider servers whose power consumption is characterized by expression (17), with parameters $P_{max} = 1000$ W and $C_s = 10$ Gbps.

We report the SFC blocking probability as a function of the average number of offered SFC requests during the PHI ($\lambda/\mu$) in Figure 9. We show the results in the case of the basic link capacity scenario and in the ones obtained considering a capacity of the links that is twice and four times higher than the basic one. We consider the case of servers with no rate adaptive power consumption ($a = 1$). As we can observe, when the offered traffic is low, the degradation in terms of SFC blocking probability introduced by the consolidation technique increases. In fact, in this low traffic scenario, more resource consolidation is possible, with the consequence of higher SFC blocking probabilities. When the offered traffic increases, the blocking probability with or without applying the consolidation technique does not change much, because the network is heavily loaded and only few servers can be turned off. Furthermore, as we can expect, the blocking probability decreases when the link capacity increases. The performance loss due to the application of the consolidation technique is what we have to pay in order to obtain benefits in terms of power saved by turning off servers. The curves in Figure 10 compare the power consumption percentage savings that we can achieve in the cases $a = 0.1$ and $a = 1$. The results show that when the offered traffic decreases and the link capacity increases, higher power consumption savings can be achieved. The reason is that we have more resources on the links and less loaded VMs, which allow for a higher number of VM migrations and resource consolidation. The other result to notice is that the use of rate adaptive servers reduces the power consumption saving when we perform resource consolidation. This is due to the lower value $P_{idle}$ of the constant power consumption of the server, which leads to lower power consumption savings when a server is switched off.

6 Conclusions

The aim of this paper has been to propose heuristics for the resource dimensioning and the routing of Service Function Chains in network architectures employing the Network Function Virtualization technology. We have introduced the optimization problem, which has the objective of maximizing the number of accepted SFC requests while complying with the server processing and link bandwidth capacities. With the problem being NP-hard, we have proposed the Maximizing the Accepted SFC Requests Number (MASRN) heuristic, which is based on the uniform occupancy of the server and link resources. The heuristic is also able to dimension the Vcores of the servers, by evaluating the number of Vcores to be allocated to each VNF instance.

An SFC planner has also been proposed for the dynamic traffic scenario. The planner is based on a consolidation algorithm that allows for the switching off of servers during low traffic periods. A case study has been analyzed, in which we have shown that the heuristic works correctly by uniformly allocating the server resources and reserving a higher number of Vcores for the VNFs characterized by higher packet processing times. Furthermore, the proposed SFC planner allows for a power consumption saving dependent on the offered traffic and varying from 10% to 70%. Such a saving is to be paid with an increase in SFC blocking probability. As future research, we will propose and investigate Virtual Machine migration policies allowing for the achievement of the right trade-off between power consumption saving and SFC blocking probability degradation.

[Figure 8: Number of allocated Vcores in each server for T = 360; panels (a)–(d) refer to the basic link capacity scenario and to the cases in which the link capacity is doubled, quadrupled, and increased by 10 times. The numbers of Vcores allocated per VNFI to the firewall, load balancer, and VPN encryption VNFs are represented with blue, green, and red colors.]

[Figure 9: SFC blocking probability as a function of the average number of SFCs offered in the PHI, for the cases in which the consolidation technique is applied and in which it is not, and for the basic, doubled, and quadrupled link capacities. The server power consumption is constant (a = 1, no rate adaptivity).]

[Figure 10: The power consumption percentage saving achieved with the application of the consolidation technique versus the average number of SFCs offered in the PHI, for the basic, doubled, and quadrupled link capacities. The cases a = 1 and a = 0.1 are considered.]


Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

The work described in this paper has been funded by Telecom Italia within the research agreement titled "Definition and Evaluation of Protocols and Architectures of a Network Automation Platform for TI-NVF Infrastructure".

References

[1] R. Mijumbi, J. Serrat, J. Gorricho, N. Bouten, F. De Turck, and R. Boutaba, "Network function virtualization: state-of-the-art and research challenges," IEEE Communications Surveys & Tutorials, vol. 18, no. 1, pp. 236–262, 2016.

[2] P. Veitch, M. J. McGrath, and V. Bayon, "An instrumentation and analytics framework for optimal and robust NFV deployment," IEEE Communications Magazine, vol. 53, no. 2, pp. 126–133, 2015.

[3] R. Yu, G. Xue, V. T. Kilari, and X. Zhang, "Network function virtualization in the multi-tenant cloud," IEEE Network, vol. 29, no. 3, pp. 42–47, 2015.

[4] Z. Bronstein, E. Roch, J. Xia, and A. Molkho, "Uniform handling and abstraction of NFV hardware accelerators," IEEE Network, vol. 29, no. 3, pp. 22–29, 2015.

[5] B. Han, V. Gopalakrishnan, L. Ji, and S. Lee, "Network function virtualization: challenges and opportunities for innovations," IEEE Communications Magazine, vol. 53, no. 2, pp. 90–97, 2015.

[6] ETSI Industry Specification Group (ISG) NFV, "ETSI Group Specifications on Network Function Virtualization, 1st Phase Documents," January 2015, http://docbox.etsi.org/ISG/NFV/Open.

[7] H. Hawilo, A. Shami, M. Mirahmadi, and R. Asal, "NFV: state of the art, challenges and implementation in next generation mobile networks (vEPC)," IEEE Network, vol. 28, no. 6, pp. 18–26, 2014.

[8] The Internet Engineering Task Force (IETF), Service Function Chaining (SFC) Working Group (WG), 2015, https://datatracker.ietf.org/wg/sfc/charter.

[9] The Internet Engineering Task Force (IETF), Service Function Chaining (SFC) Working Group (WG) Documents, 2015, https://datatracker.ietf.org/wg/sfc/documents.

[10] M. Yoshida, W. Shen, T. Kawabata, K. Minato, and W. Imajuku, "MORSA: a multi-objective resource scheduling algorithm for NFV infrastructure," in Proceedings of the 16th Asia-Pacific Network Operations and Management Symposium (APNOMS '14), pp. 1–6, Hsinchu, Taiwan, September 2014.

[11] H. Moens and F. D. Turck, "VNF-P: a model for efficient placement of virtualized network functions," in Proceedings of the 10th International Conference on Network and Service Management (CNSM '14), pp. 418–423, November 2014.

[12] R. Mijumbi, J. Serrat, J. L. Gorricho, N. Bouten, F. De Turck, and S. Davy, "Design and evaluation of algorithms for mapping and scheduling of virtual network functions," in Proceedings of the 1st IEEE Conference on Network Softwarization (NetSoft '15), pp. 1–9, University College London, London, UK, April 2015.

[13] S. Clayman, E. Maini, A. Galis, A. Manzalini, and N. Mazzocca, "The dynamic placement of virtual network functions," in Proceedings of the IEEE Network Operations and Management Symposium (NOMS '14), pp. 1–9, Krakow, Poland, May 2014.

[14] Y. Zhu and M. H. Ammar, "Algorithms for assigning substrate network resources to virtual network components," in Proceedings of the 25th IEEE International Conference on Computer Communications (INFOCOM '06), pp. 1–12, IEEE, Barcelona, Spain, April 2006.

[15] G. Lee, M. Kim, S. Choo, S. Pack, and Y. Kim, "Optimal flow distribution in service function chaining," in Proceedings of the 10th International Conference on Future Internet (CFI '15), pp. 17–20, ACM, Seoul, Republic of Korea, June 2015.

[16] A. Fischer, J. F. Botero, M. T. Beck, H. De Meer, and X. Hesselbach, "Virtual network embedding: a survey," IEEE Communications Surveys and Tutorials, vol. 15, no. 4, pp. 1888–1906, 2013.

[17] M. Rabbani, R. Pereira Esteves, M. Podlesny, G. Simon, L. Zambenedetti Granville, and R. Boutaba, "On tackling virtual data center embedding problem," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 177–184, Ghent, Belgium, May 2013.

[18] A. Basta, W. Kellerer, M. Hoffmann, H. J. Morper, and K. Hoffmann, "Applying NFV and SDN to LTE mobile core gateways, the functions placement problem," in Proceedings of the 4th Workshop on All Things Cellular: Operations, Applications and Challenges (AllThingsCellular '14), pp. 33–38, ACM, Chicago, Ill, USA, August 2014.

[19] M. Bagaa, T. Taleb, and A. Ksentini, "Service-aware network function placement for efficient traffic handling in carrier cloud," in Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC '14), pp. 2402–2407, IEEE, Istanbul, Turkey, April 2014.

[20] S. Mehraghdam, M. Keller, and H. Karl, "Specifying and placing chains of virtual network functions," in Proceedings of the IEEE 3rd International Conference on Cloud Networking (CloudNet '14), pp. 7–13, October 2014.

[21] M. Bouet, J. Leguay, and V. Conan, "Cost-based placement of vDPI functions in NFV infrastructures," in Proceedings of the 1st IEEE Conference on Network Softwarization (NetSoft '15), pp. 1–9, London, UK, April 2015.

[22] M. Xia, M. Shirazipour, Y. Zhang, H. Green, and A. Takacs, "Network function placement for NFV chaining in packet/optical datacenters," Journal of Lightwave Technology, vol. 33, no. 8, Article ID 7005460, pp. 1565–1570, 2015.

[23] O. Hyeonseok, Y. Daeun, C. Yoon-Ho, and K. Namgi, "Design of an efficient method for identifying virtual machines compatible with service chain in a virtual network environment," International Journal of Multimedia and Ubiquitous Engineering, vol. 11, no. 9, Article ID 197208, 2014.

[24] W. Cerroni and F. Callegati, "Live migration of virtual network functions in cloud-based edge networks," in Proceedings of the IEEE International Conference on Communications (ICC '14), pp. 2963–2968, IEEE, Sydney, Australia, June 2014.

[25] P. Quinn and J. Guichard, "Service function chaining: creating a service plane via network service headers," Computer, vol. 47, no. 11, pp. 38–44, 2014.

[26] M. F. Zhani, Q. Zhang, G. Simona, and R. Boutaba, "VDC Planner: dynamic migration-aware Virtual Data Center embedding for clouds," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 18–25, May 2013.

[27] M. Dobrescu, K. Argyraki, and S. Ratnasamy, "Toward predictable performance in software packet-processing platforms," in Proceedings of the 9th USENIX Symposium on Networked Systems Design and Implementation, pp. 1–14, April 2012.

Research Article
A Game for Energy-Aware Allocation of Virtualized Network Functions

Roberto Bruschi,1 Alessandro Carrega,1,2 and Franco Davoli1,2

1CNIT-University of Genoa Research Unit, 16145 Genoa, Italy
2DITEN, University of Genoa, 16145 Genoa, Italy

Correspondence should be addressed to Alessandro Carrega; alessandro.carrega@unige.it

Received 2 October 2015; Revised 22 December 2015; Accepted 11 January 2016

Academic Editor: Xavier Hesselbach

Copyright © 2016 Roberto Bruschi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Network Functions Virtualization (NFV) is a network architecture concept where network functionality is virtualized and separated into multiple building blocks that may connect or be chained together to implement the required services. The main advantages consist of an increase in network flexibility and scalability. Indeed, each part of the service chain can be allocated and reallocated at runtime, depending on demand. In this paper, we present and evaluate an energy-aware Game-Theory-based solution for resource allocation of Virtualized Network Functions (VNFs) within NFV environments. We consider each VNF as a player of the problem that competes for the physical network node capacity pool, seeking the minimization of individual cost functions. The physical network nodes dynamically adjust their processing capacity according to the incoming workload, by means of an Adaptive Rate (AR) strategy that aims at minimizing the product of energy consumption and processing delay. On the basis of the result of the nodes' AR strategy, the VNFs' resource sharing costs assume a polynomial form in the workflows, which admits a unique Nash Equilibrium (NE). We examine the effect of different (unconstrained and constrained) forms of the nodes' optimization problem on the equilibrium, and compare the power consumption and delay achieved with energy-aware and non-energy-aware strategy profiles.

1 Introduction

In the last few years, power consumption has shown a growing and alarming trend in all industrial sectors, particularly in Information and Communication Technology (ICT). Public organizations, Internet Service Providers (ISPs), and telecom operators started reporting alarming statistics of network energy requirements and of the related carbon footprint since the first decade of the 2000s [1]. The Global e-Sustainability Initiative (GeSI) estimated a growth of ICT greenhouse gas emissions (in GtCO$_2$e, Gt of CO$_2$ equivalent gases) to 2.3% of global emissions (from 1.3% in 2002) in 2020, if no Green Network Technologies (GNTs) would be adopted [2]. On the other hand, the abatement potential of ICT in other industrial sectors is seven times the size of the ICT sector's own carbon footprint.

Only recently, due to the rise in energy price, the continuous growth of customer population, the increase in broadband access demand, and the expanding number of services being offered by telecoms and providers, has energy efficiency become a high-priority objective also for wired networks and service infrastructures (after having started to be addressed for datacenters and wireless networks).

The increasing network energy consumption essentially depends on new services offered, which follow Moore's law by doubling every two years, and on the need to sustain an ever-growing population of users and user devices. In order to support new generation network infrastructures and related services, telecoms and ISPs need a larger equipment base, with sophisticated architecture able to perform more and more complex operations in a scalable way. Notwithstanding these efforts, it is well known that most networks and networking equipment are currently still provisioned for busy or rush hour load, which typically exceeds their average utilization by a wide margin. While this margin is generally reached in rare and short time periods, the overall power consumption in today's networks remains more or less constant with respect to different traffic utilization levels.

Hindawi Publishing Corporation, Journal of Electrical and Computer Engineering, Volume 2016, Article ID 4067186, 10 pages, http://dx.doi.org/10.1155/2016/4067186


The growing trend toward implementing networking functionalities by means of software [3] on general-purpose machines and of making more aggressive use of virtualization, as represented by the paradigms of Software Defined Networking (SDN) [4] and Network Functions Virtualization (NFV) [5], would also not be sufficient in itself to reduce power consumption, unless accompanied by "green" optimization and consolidation strategies acting as energy-aware traffic engineering policies at the network level [6]. At the same time, processing devices inside the network need to be capable of adapting their performance to the changing traffic conditions, by trading off power and Quality of Service (QoS) requirements. Among the various techniques that can be adopted to this purpose to implement Control Policies (CPs) in network processing devices, Dynamic Adaptation ones consist of adapting the processing rate (AR) or of exploiting low power consumption states in idle periods (LPI) [7].

In this paper, we introduce a Game-Theory-based solution for energy-aware allocation of Virtualized Network Functions (VNFs) within NFV environments. In more detail, in NFV networks a collection of service chains must be allocated on physical network nodes. A service chain is a set of one or more VNFs grouped together to provide specific service functionality, and it can be represented by an oriented graph, where each node corresponds to a particular VNF and each edge describes the operational flow exchanged between a pair of VNFs.

A service request can be allocated on dedicated hardware or by using resources deployed by the Service Provider (SP) that processes the request through virtualized instances. Because of this, two types of service deployments are possible in an NFV network: (i) on physical nodes and (ii) on virtualized instances.

In this paper we focus on the second type of service deployment. We refer to this paradigm as pure NFV. As already outlined above, the SP processes the service request by means of VNFs. In particular, we developed an energy-aware solution to the problem of VNFs' allocation on physical network nodes. This solution is based on the concept of Game Theory (GT). GT is used to model interactions among self-interested players and predict their choice of strategies to optimize cost or utility functions, until a Nash Equilibrium (NE) is reached, where no player can further increase its corresponding utility through individual action (see, e.g., [8] for specific applications in networking).

More specifically, we consider a bank of physical network nodes (in this paper we also use the terms node and resource to refer to the physical network node) performing tasks on requests submitted by the players' population. Hence, in this game the role of the players is represented by VNFs that compete for the processing capacity pool, each by seeking the minimization of an individual cost function. The nodes can dynamically adjust their processing capacity according to the incoming workload (the processing power required by incoming VNF requests) by means of an AR strategy that aims at minimizing the product of energy consumption and processing delay. On the basis of the result of the nodes' AR strategy, the VNFs' resource sharing costs assume a polynomial form in the workloads, which admits a unique NE.

We examine the effect of different (unconstrained and constrained) forms of the nodes' optimization problem on the equilibrium, and compare the power consumption and delay achieved with energy-aware and non-energy-aware strategy profiles.

The rest of the paper is organized as follows. Section 2 discusses the related work. Section 3 briefly summarizes the power model and AR optimization that was introduced in [9]. Although the scenario in [9] is different, it is reasonable to assume that similar considerations are valid here too. On the basis of this result, we derive a number of competitive strategies for the VNFs' allocation in Section 4. Section 5 presents various numerical evaluations, and Section 6 contains the conclusions.

2 Related Work

We now discuss the most relevant works that deal with the resource allocation of VNFs within NFV environments. We decided not only to describe solutions based on GT, but also to provide an overview of the current state of the art in this area. As introduced in the previous section, the key enabling paradigms that will considerably affect the dynamics of ICT networks are SDN and NFV, which are discussed in recent surveys [10–12]. Indeed, SPs and Network Operators (NOs) are facing increasing problems to design and implement novel network functionalities, following rapid changes that characterize the current ISPs and telecom operators (TOs) [13].

Virtualization represents an efficient and cost-effective strategy to exploit and share physical network resources. In this context, the Network Embedding Problem (NEP) has been considered in several recent works [14–19]. In particular, the Virtual Network Embedding Problem (VNEP) consists of finding the mapping between a set of requests for virtual network resources and the available underlying physical infrastructure (the so-called substrate), ensuring that some given performance requirements (on nodes and links) are guaranteed. Typical node requirements are computational resources (i.e., CPU) or storage space, whereas links have a limited bandwidth and introduce a delay. It has been shown that this problem is NP-hard (it includes as subproblem the multiway separator problem). For this reason, heuristic approaches have been devised [20].

The consolidation of virtual resources is considered in [18], by taking into account energy efficiency. The problem is formulated as a mixed integer linear programming (MILP) model, to understand the potential benefits that can be achieved by packing many different virtual tasks on the same physical infrastructure. The observed energy saving is up to 30% for nodes and up to 25% for link energy consumption. Reference [21] presents a solution for the resilient deployment of network functions, using OpenStack for the design and implementation of the proposed service orchestrator mechanism.

An allocation mechanism based on auction theory is proposed in [22]. In particular, the scheme selects the most remunerative virtual network requests, while satisfying QoS requirements and physical network constraints. The system is split into two network substrates modeling physical and virtual resources, with the final goal of finding the best mapping of virtual nodes and links onto physical ones, according to the QoS requirements (i.e., bandwidth, delay, and CPU bounds).

A novel network architecture is proposed in [23], to provide efficient coordinated control of both internal network function state and network forwarding state, in order to help operators achieve the following goals: (i) offering and satisfying tight service level agreements (SLAs); (ii) accurately monitoring and manipulating network traffic; and (iii) minimizing operating expenses.

Various engineering problems, where the action of one component has some impacts on the other components, have been modeled by GT. Indeed, GT, which has been applied at the beginning in economics and related domains, is gaining much interest today as a powerful tool to analyze and design communication networks [24]. Therefore, the problems can be formulated in the GT framework, and a stable solution for the components is obtained using the concept of equilibrium [25]. In this regard, GT has been used extensively to develop understanding of stable operating points for autonomous networks. The nodes are considered as the players. Payoff functions are often defined according to achieved connection bandwidth or similar technical metrics.

To construct algorithms with provable convergence to equilibrium points, many approaches consider network models that can be mapped to specially constructed games. Among this type of games, potential games use a real-valued function that represents the entire player set to optimize some performance metric [8, 26]. We mention Altman et al. [27], who provide an extensive survey on networking games. The models and papers discussed in this reference mostly deal with noncooperative GT, the only exception being a short section focused on bargaining games. Finally, Seddiki et al. [25] presented an approach based on two-stage noncooperative games for bandwidth allocation, which aims at reducing the complexity of network management and avoiding bandwidth performance problems in a virtualized network environment. The first stage of the game is the bandwidth negotiation, where the SP requests bandwidth from multiple Infrastructure Providers (InPs). Each InP decides whether to accept or deny the request, when the SP would cause link congestion. The second stage is the bandwidth provisioning game, where different SPs compete for the bandwidth capacity of a physical link managed by a single InP.

3 Modeling Power Management Techniques

Dynamic Adaptation techniques in physical network nodes reduce the energy usage by exploiting the fact that systems do not need to run at peak performance all the time.

Rate adaptation is obtained by tuning the clock frequency and/or voltage of processors (DVFS, Dynamic Voltage and Frequency Scaling), or by throttling the CPU clock (i.e., the clock signal is disabled for some number of cycles at regular intervals). Decreasing the operating frequency and the voltage of the node, or throttling its clock, obviously allows the reduction of power consumption and of heat dissipation, at the price of slower performance.

Advantage can be taken of these adaptation capabilities to devise control techniques based on feedback on the incoming load. One of the first proposals is that examined in a seminal paper by Nedevschi et al. [28]. Among other possibilities, we consider here the simple optimization strategy developed in [9], aimed at minimizing the product of power consumption and processing delay with respect to the processing load. The application of this control technique gives rise to a quadratic dependence of the power-delay product on the load, which will be exploited in our subsequent game theoretic resource allocation problem. Unlike the cited works, our solution considers as load the requests rate of services that the SP must process. However, apart from this difference, the general considerations described are valid here too.

Specifically, the dynamic frequency-dependent part of the processor power consumption is proportional to the clock frequency $\nu$ and to the square of the supply voltage $V$ [29]. However, if DVFS is used, there is proportionality between the frequency and the voltage raised to some power $\gamma$, $0 < \gamma \le 1$ [30]. Moreover, the power scaling effect induced by alternating active-idle periods in the queueing system served by the physical network node can be accounted for by multiplying the dynamic power consumption by an increasing concave function of the resource utilization, of the form $(f/\mu)^{1/\alpha}$, where $\alpha \ge 1$ is a parameter and $f$ and $\mu$ are the workload (processing requests per unit time) and the task processing speed, respectively. By taking $\gamma = 1$ (which implies a cubic relationship in the operating frequency), the overall dependence of the power consumption $\Phi$ on the processing speed $\mu$ (which is proportional to the operating frequency) can be expressed as [9]

$$\Phi(\mu) = K \mu^3 \left(\frac{f}{\mu}\right)^{1/\alpha}, \qquad (1)$$

where $K > 0$ is a proportionality constant. This expression will be exploited in the optimization problems to be defined and studied in the next section.
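To make the shape of (1) concrete, here is a minimal Python sketch of the power model; the parameter values in the example are arbitrary and only for illustration.

```python
def node_power(mu, f, K=1.0, alpha=2.0):
    """Power model of (1): Phi(mu) = K * mu^3 * (f/mu)^(1/alpha).

    mu    : processing speed (proportional to the clock frequency)
    f     : workload, i.e., processing requests per unit time (f <= mu)
    K     : proportionality constant, K > 0
    alpha : utilization-scaling exponent, alpha >= 1
    """
    return K * mu ** 3 * (f / mu) ** (1.0 / alpha)

# Example: halving the speed of a node serving the same load cuts power by ~82%.
print(node_power(mu=2.0, f=1.0), node_power(mu=1.0, f=1.0))
```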

4 System Model of Coordinated Network Management Optimization and Players' Game

We consider the situation shown in Figure 1. There are $S$ VNFs, represented by the incoming workload rates $\lambda_1, \ldots, \lambda_S$, competing for $N$ physical network nodes. The workload to the $j$th node is given by

$$f_j = \sum_{i=1}^{S} f_j^i, \qquad (2)$$

where $f_j^i$ is VNF $i$'s contribution. The total workload vector of player $i$ is $\mathbf{f}^i = \mathrm{col}[f_1^i, \ldots, f_N^i]$ and, obviously,

$$\sum_{j=1}^{N} f_j^i = \lambda_i. \qquad (3)$$


Figure 1: System model for VNFs resource allocation.

The computational resources have processing rates $\mu_j$, $j = 1, \ldots, N$, and we associate a cost function with each one, namely,

$$c_j(\mu_j) = \frac{K_j (f_j)^{1/\alpha_j} (\mu_j)^{3-(1/\alpha_j)}}{\mu_j - f_j}. \qquad (4)$$

Cost functions of the form shown in (4) have been introduced in [9] and represent the product of power and processing delay under the M/M/1 assumption on each node's queue. They actually correspond to the average energy consumption that can be attributed to functions' handling (considering that the node will be active whenever there are queued functions). Despite the possible inaccuracy in the representation of the queueing behavior, the denominator of (4) reflects the penalty paid for approaching the resource capacity, which is a major aspect in a model to be used for control purposes.
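As a quick numerical illustration, the following sketch evaluates (4) and checks that it is indeed the product of the power model (1) and the M/M/1 delay $1/(\mu - f)$; the parameter values are arbitrary.

```python
def node_cost(mu, f, K=1.0, alpha=2.0):
    """Power-delay cost of (4): K * f^(1/alpha) * mu^(3 - 1/alpha) / (mu - f).

    Requires mu > f; the denominator is the M/M/1 delay term, so the cost
    grows without bound as the workload f approaches the capacity mu.
    """
    return K * f ** (1.0 / alpha) * mu ** (3.0 - 1.0 / alpha) / (mu - f)

K, alpha, mu, f = 2.0, 2.5, 1.5, 0.9
power = K * mu ** 3 * (f / mu) ** (1.0 / alpha)   # Phi(mu) from (1)
delay = 1.0 / (mu - f)                            # M/M/1 response time
assert abs(node_cost(mu, f, K, alpha) - power * delay) < 1e-12
```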

We now consider two specific control problems, whose resulting resource allocations are interconnected. Specifically, we assume the presence of Control Policies (CPs) acting in the network node, whose aim is to dynamically adjust the processing rate in order to minimize cost functions of the form of (4). On the other hand, the players representing the VNFs, knowing the policy adopted by the CPs and the resulting form of their optimal costs, implement their own strategies to distribute their requests among the different resources.

The simple control structure that we envisage actually represents a situation that may be of interest in a number of different operational settings. We only mention two different cases of current high relevance: (i) a multitenant environment, where a number of ISPs are offered services by a datacenter operator (or by a telecom operator in the access network) on a number of virtual machines; and (ii) a collection of virtual environments created by an SP on behalf of its customers, where services are activated to represent customers' functionalities on their own private virtual LANs. It is worth noting that, in both cases, the nodes' customers may be unwilling to disclose their loads to one another, which justifies the decentralized game optimization.

4.1 Nodes' Control Policies. We consider two possible variants.

4.1.1 CP1 (Unconstrained). The direct minimization of (4) with respect to $\mu_j$ yields immediately

$$\mu_j^*(f_j) = \frac{3\alpha_j - 1}{2\alpha_j - 1}\, f_j = \theta_j f_j,$$

$$c_j^*(f_j) = K_j\, \frac{(\theta_j)^{3-(1/\alpha_j)}}{\theta_j - 1}\, (f_j)^2 = h_j \cdot (f_j)^2. \qquad (5)$$

We must note that we are considering a continuous solution to the node capacity adjustment. In practice, the physical resources allow a discrete set of working frequencies, with corresponding processing capacities. This would also ensure that the processing speed does not decrease below a lower threshold, avoiding excessive delay in the case of low load. The unconstrained problem is an idealized variant that would make sense only when the load on the node is not too small.
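A small sketch of the closed form (5), with a numerical sanity check that $\mu_j^* = \theta_j f_j$ actually beats nearby processing speeds; the parameter values are arbitrary.

```python
def theta(alpha):
    """Optimal speed-to-load ratio in (5): theta = (3*alpha - 1)/(2*alpha - 1)."""
    return (3.0 * alpha - 1.0) / (2.0 * alpha - 1.0)

def h_coeff(K, alpha):
    """Coefficient of the quadratic optimal cost in (5)."""
    t = theta(alpha)
    return K * t ** (3.0 - 1.0 / alpha) / (t - 1.0)

# Sanity check against a direct evaluation of (4) at mu = theta * f.
K, alpha, f = 2.0, 2.5, 0.6                      # illustrative values
cost = lambda mu: K * f ** (1 / alpha) * mu ** (3 - 1 / alpha) / (mu - f)
mu_star = theta(alpha) * f
assert abs(cost(mu_star) - h_coeff(K, alpha) * f ** 2) < 1e-9
# mu_star should beat nearby processing speeds.
assert all(cost(mu_star) <= cost(s * mu_star) for s in (0.8, 0.9, 1.1, 1.25))
```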

4.1.2 CP2 (Individual Constraints). Each node has a maximum and a minimum processing capacity, which are taken explicitly into account ($\mu_j^{\min} \le \mu_j \le \mu_j^{\max}$, $j = 1, \ldots, N$). Then,

$$\mu_j^*(f_j) = \begin{cases} \theta_j f_j, & \underline{f}_j = \dfrac{\mu_j^{\min}}{\theta_j} \le f_j \le \dfrac{\mu_j^{\max}}{\theta_j} = \overline{f}_j, \\[2mm] \mu_j^{\min}, & 0 < f_j < \underline{f}_j, \\[2mm] \mu_j^{\max}, & \mu_j^{\max} > f_j > \overline{f}_j. \end{cases} \qquad (6)$$

Therefore,

$$c_j^*(f_j) = \begin{cases} h_j \cdot (f_j)^2, & \underline{f}_j \le f_j \le \overline{f}_j, \\[2mm] \dfrac{K_j (f_j)^{1/\alpha_j} (\mu_j^{\max})^{3-(1/\alpha_j)}}{\mu_j^{\max} - f_j}, & \mu_j^{\max} > f_j > \overline{f}_j, \\[2mm] \dfrac{K_j (f_j)^{1/\alpha_j} (\mu_j^{\min})^{3-(1/\alpha_j)}}{\mu_j^{\min} - f_j}, & 0 < f_j < \underline{f}_j. \end{cases} \qquad (7)$$

In this case, we assume explicitly that $\sum_{i=1}^{S} \lambda_i < \sum_{j=1}^{N} \mu_j^{\max}$.
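The three branches of (6)-(7) translate directly into code; a minimal sketch follows (the feasibility condition $f_j < \mu_j^{\max}$ is assumed to hold, as guaranteed by the total-demand assumption).

```python
def cp2_cost(f, K, alpha, mu_min, mu_max):
    """Optimal node cost under CP2, following the three branches of (6)-(7)."""
    t = (3.0 * alpha - 1.0) / (2.0 * alpha - 1.0)   # theta_j
    f_lo, f_hi = mu_min / t, mu_max / t             # rate-adaptation region
    if f_lo <= f <= f_hi:
        # AR tracks the load (mu = theta * f): quadratic cost h * f^2
        h = K * t ** (3.0 - 1.0 / alpha) / (t - 1.0)
        return h * f * f
    # Otherwise the speed is pinned at the nearest capacity bound.
    mu = mu_min if f < f_lo else mu_max
    return K * f ** (1.0 / alpha) * mu ** (3.0 - 1.0 / alpha) / (mu - f)
```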

4.2 Noncooperative Players' Optimization. Given the optimal CPs, which can be found in functional form, we can then state the following.

4.2.1 Players' Problem. Each VNF $i$ wants to minimize, with respect to its workload vector $\mathbf{f}^i = \mathrm{col}[f_1^i, \ldots, f_N^i]$, a weighted (over its workload distribution) sum of the resources' costs, given the flow distribution of the others:

$$\mathbf{f}^{i*} = \operatorname*{argmin}_{\substack{f_1^i, \ldots, f_N^i \\ \sum_{j=1}^{N} f_j^i = \lambda_i}} J^i = \operatorname*{argmin}_{\substack{f_1^i, \ldots, f_N^i \\ \sum_{j=1}^{N} f_j^i = \lambda_i}} \frac{1}{\lambda_i} \sum_{j=1}^{N} f_j^i\, c_j^*(f_j), \quad i = 1, \ldots, S. \qquad (8)$$

In this formulation, the players' problem is a noncooperative game, of which we can seek a NE [31].

We examine the case of the application of CP1 first. Then the cost of VNF $i$ is given by

$$J^i = \frac{1}{\lambda_i} \sum_{j=1}^{N} f_j^i\, h_j \cdot (f_j)^2 = \frac{1}{\lambda_i} \sum_{j=1}^{N} f_j^i\, h_j \cdot (f_j^i + f_j^{-i})^2, \qquad (9)$$

where $f_j^{-i} = \sum_{k \ne i} f_j^k$ is the total flow from the other VNFs to node $j$. The cost in (9) is convex in $f_j^i$ and belongs to a category of cost functions previously investigated in the networking literature, without specific reference to energy efficiency [32, 33]. In particular, it is in the family of functions considered in [33], for which the NE Point (NEP) existence and uniqueness conditions of Rosen [34] have been shown to hold. Therefore, our players' problem admits a unique NEP. The latter can be found by considering the Karush-Kuhn-Tucker stationarity conditions with Lagrange multiplier $\xi_i$:

$$\xi_i = \frac{\partial J^i}{\partial f_k^i} \quad \text{if } f_k^i > 0, \qquad \xi_i \le \frac{\partial J^i}{\partial f_k^i} \quad \text{if } f_k^i = 0, \qquad k = 1, \ldots, N, \qquad (10)$$

from which, for $f_k^i > 0$,

$$\lambda_i \xi_i = h_k (f_k^i + f_k^{-i})^2 + 2 h_k f_k^i (f_k^i + f_k^{-i}),$$

$$f_k^i = -\frac{2}{3} f_k^{-i} \pm \frac{1}{3 h_k} \sqrt{(h_k f_k^{-i})^2 + 3 h_k \lambda_i \xi_i}. \qquad (11)$$

Excluding the negative solution, the nonzero components are those with the smallest and equal partial derivatives; that is, in a subset $\mathcal{M} \subseteq \{1, \ldots, N\}$ they yield $\sum_{j \in \mathcal{M}} f_j^i = \lambda_i$, that is,

$$\sum_{j \in \mathcal{M}} \left[ -\frac{2}{3} f_j^{-i} + \frac{1}{3 h_j} \sqrt{(h_j f_j^{-i})^2 + 3 h_j \lambda_i \xi_i} \right] = \lambda_i, \qquad (12)$$

from which $\xi_i$ can be determined.
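Conditions (11)-(12) lend themselves to a simple numerical scheme: for each player, bisect on the multiplier $\xi_i$ until the nonnegative components of (11) sum to $\lambda_i$, and iterate best responses over the players. The following Python sketch is our own illustration (the paper's results were produced in MATLAB); names and the stopping rule are assumptions, and best-response iteration is used as a heuristic that exploits the uniqueness of the NEP.

```python
import math

def best_response(lam_i, f_other, h):
    """Best response of one VNF: bisection on the multiplier xi of (11)-(12).

    lam_i   : total rate of the player (lambda_i)
    f_other : per-node aggregate flows of the other players (f_j^{-i})
    h       : per-node quadratic cost coefficients (h_j)
    """
    def flows(xi):
        # Positive root of (11), clipped at zero for inactive nodes.
        return [max(0.0, -2.0 / 3.0 * a
                    + math.sqrt((hj * a) ** 2 + 3.0 * hj * lam_i * xi) / (3.0 * hj))
                for hj, a in zip(h, f_other)]
    lo, hi = 0.0, 1.0
    while sum(flows(hi)) < lam_i:      # the total flow grows with xi: bracket it
        hi *= 2.0
    for _ in range(100):               # then bisect until (12) is met
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if sum(flows(mid)) < lam_i else (lo, mid)
    return flows(hi)

def nash_equilibrium(lam, h, sweeps=200):
    """Round-robin best-response iteration towards the unique NEP."""
    S, N = len(lam), len(h)
    f = [[lam_i / N] * N for lam_i in lam]        # start from even splits
    for _ in range(sweeps):
        for i in range(S):
            f_other = [sum(f[k][j] for k in range(S) if k != i) for j in range(N)]
            f[i] = best_response(lam[i], f_other, h)
    return f
```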

As regards form (7) of the optimal node cost, we can note that, if we are restricted to $\underline{f}_j \le f_j \le \overline{f}_j$, $j = 1, \ldots, N$ (a coupled convex constraint set), the conditions of Rosen still hold. If we allow the larger constraint set $0 < f_j < \mu_j^{\max}$, $j = 1, \ldots, N$, the nodes' optimal cost functions are no longer of polynomial type. However, the composite function is still continuously differentiable (see the Appendix). Then it would (i) satisfy the conditions for a convex game (the overall function composed of the three parts is convex), which guarantee the existence of a NEP [34, Th. 1], and (ii) possess equivalent properties to functions of Type A as defined in [32], which lead to uniqueness of the NEP.

5 Numerical Results

In order to evaluate our criterion, we present numerical results deriving from the application of the simplest Control Policy (CP1), which implies the solution of a completely quadratic problem (corresponding to costs as in (9) for the NEP). We compare the resulting allocations, power consumption, and average delays with those stemming from the application (on non-energy-aware nodes) of the algorithm for the minimization of the pure delay function that was developed in [35, Proposition 1]. This algorithm is considered a valid reference point, in order to provide good validation of our criterion. The corresponding cost of VNF $i$ is

$$J_T^i = \sum_{j=1}^{N} f_j^i\, T_j(f_j), \quad i = 1, \ldots, S, \qquad (13)$$

where

$$T_j(f_j) = \begin{cases} \dfrac{1}{\mu_j^{\max} - f_j}, & f_j < \mu_j^{\max}, \\[2mm] \infty, & f_j \ge \mu_j^{\max}. \end{cases} \qquad (14)$$

The total demand is assumed to be less than the total operating capacity, as we have done with our CP2 in (7).

To make the comparison fair, we have to make sure that the maximum operating capacities $\mu_j^{\max}$, $j = 1, \ldots, N$ (that are constant in (14)), are compatible with the quadratic behavior stemming from the application of CP1. To this aim, let $^{(\mathrm{LCP1})}\!f_j$ denote the total input workload to node $j$ corresponding to the NEP obtained by our problem; we then choose

$$\mu_j^{\max} = \theta_j\, {}^{(\mathrm{LCP1})}\!f_j, \quad j = 1, \ldots, N. \qquad (15)$$

We indicate with $^{(T)}\!f_j$, $^{(T)}\!f_j^i$, $j = 1, \ldots, N$, $i = 1, \ldots, S$, the total node input workloads and those corresponding to VNF $i$, respectively, produced by the NEP deriving from the minimization of costs in (13). The average delays for the flow of player $i$ under the two strategy profiles are obtained as

$$D_T^i = \frac{1}{\lambda_i} \sum_{j=1}^{N} \frac{^{(T)}\!f_j^i}{\mu_j^{\max} - {}^{(T)}\!f_j} = \frac{1}{\lambda_i} \sum_{j=1}^{N} \frac{^{(T)}\!f_j^i}{\theta_j\, {}^{(\mathrm{LCP1})}\!f_j - {}^{(T)}\!f_j},$$

$$D_{\mathrm{LCP1}}^i = \frac{1}{\lambda_i} \sum_{j=1}^{N} \frac{^{(\mathrm{LCP1})}\!f_j^i}{\theta_j\, {}^{(\mathrm{LCP1})}\!f_j - {}^{(\mathrm{LCP1})}\!f_j} = \frac{1}{\lambda_i} \sum_{j=1}^{N} \frac{^{(\mathrm{LCP1})}\!f_j^i}{(\theta_j - 1)\, {}^{(\mathrm{LCP1})}\!f_j}, \qquad (16)$$

and the power consumption values are given by

$$^{(T)}\!P_j = K_j (\mu_j^{\max})^3 = K_j \left(\theta_j\, {}^{(\mathrm{LCP1})}\!f_j\right)^3,$$

$$^{(\mathrm{LCP1})}\!P_j = K_j \left(\theta_j\, {}^{(\mathrm{LCP1})}\!f_j\right)^3 \left( \frac{^{(\mathrm{LCP1})}\!f_j}{\theta_j\, {}^{(\mathrm{LCP1})}\!f_j} \right)^{1/\alpha_j} = K_j \left(\theta_j\, {}^{(\mathrm{LCP1})}\!f_j\right)^3 \theta_j^{-1/\alpha_j}. \qquad (17)$$

Our data for the comparison are organized as follows. We consider normalized input rates, so that $\lambda_i \le 1$, $i = 1, \ldots, S$. We generate randomly $S$ values corresponding to the VNFs' input rates from independent uniform distributions; then

(1) we find the NEP of our quadratic game by choosing the parameters $K_j$, $\alpha_j$, $j = 1, \ldots, N$, also from independent uniform distributions (with $1 \le K_j \le 10$, $2 \le \alpha_j \le 3$, resp.);

(2) by using the workload profiles obtained, we compute the values $\mu_j^{\max}$, $j = 1, \ldots, N$, from (15) and find the NEP corresponding to costs in (13) and (14), by using the same input rates;

(3) we compute the corresponding average delays and power consumption values for the two cases by using (16) and (17), respectively.

Steps (1)–(3) are repeated with a new configuration of random values of the input rates for a certain number of times; then, for both games, we average the values of the delays and power consumption per VNF and of the total delay and power consumption averaged over the VNFs over all trials. We compare the results obtained in this way for four different settings of VNFs and nodes, namely, $S = N = 3$; $S = 3$, $N = 5$; $S = 5$, $N = 3$; $S = 5$, $N = 5$. The rationale of repeating the experiments with random configurations is to explore a number of different possible settings and collect the results in a synthetic form.
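Putting (15)–(17) together with steps (1)-(2), one Monte Carlo trial can be sketched as below, reusing the nash_equilibrium helper from the sketch above. The distributions follow the text; the NEE side, which requires solving the delay game (13)-(14) with the same input rates, is left out here, so this is an outline rather than the authors' MATLAB implementation.

```python
import random

def one_trial(S=3, N=3):
    """One trial of steps (1)-(2), returning the EE-side metrics of (16)-(17)."""
    lam = [random.uniform(0.0, 1.0) for _ in range(S)]       # normalized rates
    K = [random.uniform(1.0, 10.0) for _ in range(N)]
    alpha = [random.uniform(2.0, 3.0) for _ in range(N)]
    th = [(3 * a - 1) / (2 * a - 1) for a in alpha]          # theta_j
    h = [K[j] * th[j] ** (3 - 1 / alpha[j]) / (th[j] - 1) for j in range(N)]
    f = nash_equilibrium(lam, h)                             # EE game, costs (9)
    f_tot = [sum(f[i][j] for i in range(S)) for j in range(N)]
    mu_max = [th[j] * f_tot[j] for j in range(N)]            # sizing rule (15)
    # EE delays and powers from (16)-(17); the NEE values need the NEP of (13).
    D_ee = [sum(f[i][j] / ((th[j] - 1) * f_tot[j]) for j in range(N)) / lam[i]
            for i in range(S)]
    P_ee = [K[j] * mu_max[j] ** 3 * th[j] ** (-1 / alpha[j]) for j in range(N)]
    return D_ee, P_ee, mu_max
```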

For each VNFs-nodes setting, 30 random input configurations are generated to produce the final average values. In Figures 2–10, the labels EE and NEE refer to the energy-efficient case (quadratic cost) and to the non-energy-efficient one (see (13) and (14)), respectively. Instead, the label "Saving (%)" refers to the saving (in percentage) of the EE case compared to the NEE one; when it is negative, the EE case presents higher values than the NEE one. Finally, the label "User" refers to the VNF.

Figure 2: Power consumption with $S = N = 3$.

Figure 3: User (VNF) delays with $S = N = 3$.

The results are reported in Figures 2–10. The model implementation and the relative results have been developed with MATLAB® [36].

In general, the energy-efficient optimization tends to save energy at the expense of a relatively small increase in delay. Indeed, as shown in the figures, the saving is positive for the energy consumption and negative for delay. The energy saving is always higher than 18% in every case, while the delay increase is always lower than 4%.

Figure 4: Power consumption with $S = 3$, $N = 5$.

Figure 5: User (VNF) delays with $S = 3$, $N = 5$.

Figure 6: Power consumption with $S = 5$, $N = 3$.

Figure 7: User (VNF) delays with $S = 5$, $N = 3$.

Figure 8: Power consumption with $S = 5$, $N = 5$.

Figure 9: User (VNF) delays with $S = 5$, $N = 5$.

Figure 10: Power consumption and delay product of the various cases.

However, it can be noted (by comparing, e.g., Figures 5 and 7) that configurations with lower load are penalized in the delay. This effect is caused by the behavior of the simple linear adaptation strategy, which maintains constant utilization, and it highlights the relevance of setting not only maximum but also minimum values for the processing capacity (or for the node input workloads).

Another consideration arising from the results concerns the variation of the energy consumption by changing the number of players and/or the nodes. For example, Figure 6 (case $S = 5$, $N = 3$) shows total power consumption significantly higher than the one shown in Figure 8 (case $S = 5$, $N = 5$). The reasons for this difference are due to the lower workload managed by the nodes of the case $S = 5$, $N = 5$ (greater number of nodes with an equal number of users), making it possible to exploit the AR mechanisms with significant energy saving.

Finally, Figure 10 shows the product of the power consumption and the delay for each studied case. As described in the previous sections, our criterion aims at minimizing this product. As can be easily seen, the saving resulting from the application of our policy is always above 10%.

6 Conclusions

We have examined the possible effect of introducing simple energy optimization strategies in physical network nodes performing services for a set of VNFs that request their computational power within an NFV environment. The VNFs' strategy profiles for resource sharing have been obtained from the solution of a noncooperative game, where the players are aware of the adaptive behavior of the resources and aim at minimizing costs that reflect this behavior. We have compared the energy consumption and processing delay with those stemming from a game played with non-energy-aware nodes and VNFs. The results show that the effect on delay is almost negligible, whereas significant power saving can be obtained. In particular, the energy saving is always higher than 18% in every case, with a delay increase always lower than 4%.

From another point of view, it would also be of interest to investigate socially optimal solutions stemming from a Nash Bargaining Problem (NBP), along the lines of [37], by defining suitable utility functions for the players. Another interesting evolution of this work could be the application of additional constraints to the proposed solution, such as, for example, the maximum delay that a flow can support.

Appendix

We check the maintenance of continuous differentiability at the point $f_j = \mu_j^{\max}/\theta_j$. By noting that the derivative of the cost function of player $i$ with respect to $f_j^i$ has the form

$$\frac{\partial J^i}{\partial f_j^i} = c_j^*(f_j) + f_j^i\, c_j^{*\prime}(f_j), \qquad (A.1)$$

it is immediate to note that we need to compare

$$\left. \frac{\partial}{\partial f_j^i}\, \frac{K_j (f_j^i + f_j^{-i})^{1/\alpha_j} (\mu_j^{\max})^{3 - 1/\alpha_j}}{\mu_j^{\max} - f_j^i - f_j^{-i}} \right|_{f_j^i = \mu_j^{\max}/\theta_j - f_j^{-i}}, \qquad \left. \frac{\partial}{\partial f_j^i}\, h_j \cdot (f_j)^2 \right|_{f_j^i = \mu_j^{\max}/\theta_j - f_j^{-i}}. \qquad (A.2)$$

We have

$$K_j \left. \frac{(1/\alpha_j)(f_j)^{1/\alpha_j - 1} (\mu_j^{\max})^{3 - 1/\alpha_j} (\mu_j^{\max} - f_j) + (f_j)^{1/\alpha_j} (\mu_j^{\max})^{3 - 1/\alpha_j}}{(\mu_j^{\max} - f_j)^2} \right|_{f_j = \mu_j^{\max}/\theta_j}$$

$$= \frac{K_j (\mu_j^{\max})^{3 - 1/\alpha_j} (\mu_j^{\max}/\theta_j)^{1/\alpha_j} \left[ (1/\alpha_j)(\mu_j^{\max}/\theta_j)^{-1} (\mu_j^{\max} - \mu_j^{\max}/\theta_j) + 1 \right]}{(\mu_j^{\max} - \mu_j^{\max}/\theta_j)^2}$$

$$= \frac{K_j (\mu_j^{\max})^{3} (\theta_j)^{-1/\alpha_j} \left[ (1/\alpha_j)\, \theta_j (1 - 1/\theta_j) + 1 \right]}{(\mu_j^{\max})^2 (1 - 1/\theta_j)^2} = \frac{K_j \mu_j^{\max} (\theta_j)^{-1/\alpha_j} (\theta_j/\alpha_j - 1/\alpha_j + 1)}{((\theta_j - 1)/\theta_j)^2},$$

$$2 h_j f_j \Big|_{f_j = \mu_j^{\max}/\theta_j} = 2 K_j\, \frac{(\theta_j)^{3 - 1/\alpha_j}}{\theta_j - 1} \cdot \frac{\mu_j^{\max}}{\theta_j}. \qquad (A.3)$$

Then let us check when

$$\frac{K_j \mu_j^{\max} (\theta_j)^{-1/\alpha_j} (\theta_j/\alpha_j - 1/\alpha_j + 1)}{((\theta_j - 1)/\theta_j)^2} = 2 K_j\, \frac{(\theta_j)^{3 - 1/\alpha_j}}{\theta_j - 1} \cdot \frac{\mu_j^{\max}}{\theta_j}$$

$$\Longleftrightarrow \frac{(\theta_j/\alpha_j - 1/\alpha_j + 1) \cdot (\theta_j)^2}{\theta_j - 1} = 2 (\theta_j)^2 \Longleftrightarrow \frac{\theta_j}{\alpha_j} - \frac{1}{\alpha_j} + 1 = 2 \theta_j - 2$$

$$\Longleftrightarrow \left( \frac{1}{\alpha_j} - 2 \right) \frac{3 \alpha_j - 1}{2 \alpha_j - 1} - \frac{1}{\alpha_j} + 3 = 0 \Longleftrightarrow \left( \frac{1 - 2 \alpha_j}{\alpha_j} \right) \frac{3 \alpha_j - 1}{2 \alpha_j - 1} - \frac{1}{\alpha_j} + 3 = 0$$

$$\Longleftrightarrow \frac{1 - 3 \alpha_j}{\alpha_j} - \frac{1}{\alpha_j} + 3 = 0 \Longleftrightarrow 1 - 3 \alpha_j - 1 + 3 \alpha_j = 0. \qquad (A.4)$$

Since the last equality holds identically, the two derivatives in (A.2) coincide, and $c_j^*$ is continuously differentiable at $f_j = \mu_j^{\max}/\theta_j$.
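The identity in (A.4) can also be spot-checked numerically. A minimal sketch (parameter values arbitrary) comparing the two one-sided derivatives of $c_j^*$ at the junction $f_j = \mu_j^{\max}/\theta_j$:

```python
K, alpha, mu_max = 2.0, 2.4, 5.0                  # illustrative values
theta = (3 * alpha - 1) / (2 * alpha - 1)
h = K * theta ** (3 - 1 / alpha) / (theta - 1)
f_hi = mu_max / theta                             # junction point of (7)
eps = 1e-6

quad = lambda f: h * f * f                                            # inner branch
sat = lambda f: K * f ** (1 / alpha) * mu_max ** (3 - 1 / alpha) / (mu_max - f)

d_left = (quad(f_hi) - quad(f_hi - eps)) / eps    # derivative from below
d_right = (sat(f_hi + eps) - sat(f_hi)) / eps     # derivative from above
assert abs(d_left - d_right) / d_left < 1e-4      # they coincide, as (A.4) shows
```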

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was partially supported by the H2020 European Projects ARCADIA (http://www.arcadia-framework.eu) and INPUT (http://www.input-project.eu), funded by the European Commission.

References

[1] C. Bianco, F. Cucchietti, and G. Griffa, "Energy consumption trends in the next generation access network–a telco perspective," in Proceedings of the 29th International Telecommunication Energy Conference (INTELEC '07), pp. 737–742, Rome, Italy, September-October 2007.

[2] GeSI, SMARTer 2020: The Role of ICT in Driving a Sustainable Future, December 2012, http://gesi.org/portfolio/report/72.

[3] A. Manzalini, "Future edge ICT networks," IEEE COMSOC MMTC E-Letter, vol. 7, no. 7, pp. 1–4, 2012.

[4] B. A. A. Nunes, M. Mendonca, X.-N. Nguyen, K. Obraczka, and T. Turletti, "A survey of software-defined networking: past, present, and future of programmable networks," IEEE Communications Surveys & Tutorials, vol. 16, no. 3, pp. 1617–1634, 2014.

[5] "Network functions virtualisation–introductory white paper," in Proceedings of the SDN and OpenFlow World Congress, Darmstadt, Germany, October 2012.

[6] R. Bolla, R. Bruschi, F. Davoli, and C. Lombardo, "Fine-grained energy-efficient consolidation in SDN networks and devices," IEEE Transactions on Network and Service Management, vol. 12, no. 2, pp. 132–145, 2015.

[7] R. Bolla, R. Bruschi, F. Davoli, and F. Cucchietti, "Energy efficiency in the future internet: a survey of existing approaches and trends in energy-aware fixed network infrastructures," IEEE Communications Surveys & Tutorials, vol. 13, no. 2, pp. 223–244, 2011.

[8] I. Menache and A. Ozdaglar, Network Games: Theory, Models, and Dynamics, Morgan & Claypool Publishers, San Rafael, Calif, USA, 2011.

[9] R. Bolla, R. Bruschi, F. Davoli, and P. Lago, "Optimizing the power-delay product in energy-aware packet forwarding engines," in Proceedings of the 24th Tyrrhenian International Workshop on Digital Communications–Green ICT (TIWDC '13), pp. 1–5, IEEE, Genoa, Italy, September 2013.

[10] A. Khan, A. Zugenmaier, D. Jurca, and W. Kellerer, "Network virtualization: a hypervisor for the internet," IEEE Communications Magazine, vol. 50, no. 1, pp. 136–143, 2012.

[11] Q. Duan, Y. Yan, and A. V. Vasilakos, "A survey on service-oriented network virtualization toward convergence of networking and cloud computing," IEEE Transactions on Network and Service Management, vol. 9, no. 4, pp. 373–392, 2012.

[12] G. M. Roy, S. K. Saurabh, N. M. Upadhyay, and P. K. Gupta, "Creation of virtual node, virtual link and managing them in network virtualization," in Proceedings of the World Congress on Information and Communication Technologies (WICT '11), pp. 738–742, IEEE, Mumbai, India, December 2011.

[13] A. Manzalini, R. Saracco, C. Buyukkoc, et al., "Software-defined networks or future networks and services–main technical challenges and business implications," in Proceedings of the IEEE Workshop SDN4FNS, Trento, Italy, November 2013.

[14] M. Yu, Y. Yi, J. Rexford, and M. Chiang, "Rethinking virtual network embedding: substrate support for path splitting and migration," ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, pp. 17–19, 2008.

[15] A. Jarray and A. Karmouch, "Decomposition approaches for virtual network embedding with one-shot node and link mapping," IEEE/ACM Transactions on Networking, vol. 23, no. 3, pp. 1012–1025, 2015.

[16] N. M. M. K. Chowdhury, M. R. Rahman, and R. Boutaba, "Virtual network embedding with coordinated node and link mapping," in Proceedings of the 28th Conference on Computer Communications (IEEE INFOCOM '09), pp. 783–791, Rio de Janeiro, Brazil, April 2009.

[17] X. Cheng, S. Su, Z. Zhang, et al., "Virtual network embedding through topology-aware node ranking," ACM SIGCOMM Computer Communication Review, vol. 41, no. 2, pp. 38–47, 2011.

[18] J. F. Botero, X. Hesselbach, M. Duelli, D. Schlosser, A. Fischer, and H. De Meer, "Energy efficient virtual network embedding," IEEE Communications Letters, vol. 16, no. 5, pp. 756–759, 2012.

[19] A. Leivadeas, C. Papagianni, and S. Papavassiliou, "Efficient resource mapping framework over networked clouds via iterated local search-based request partitioning," IEEE Transactions on Parallel and Distributed Systems, vol. 24, no. 6, pp. 1077–1086, 2013.

[20] A. Fischer, J. F. Botero, M. T. Beck, H. de Meer, and X. Hesselbach, "Virtual network embedding: a survey," IEEE Communications Surveys & Tutorials, vol. 15, no. 4, pp. 1888–1906, 2013.

[21] M. Scholler, M. Stiemerling, A. Ripke, and R. Bless, "Resilient deployment of virtual network functions," in Proceedings of the 5th International Congress on Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT '13), pp. 208–214, Almaty, Kazakhstan, September 2013.

[22] A. Jarray and A. Karmouch, "VCG auction-based approach for efficient Virtual Network embedding," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 609–615, Ghent, Belgium, May 2013.

[23] A. Gember-Jacobson, R. Viswanathan, C. Prakash, et al., "OpenNF: enabling innovation in network function control," in Proceedings of the ACM Conference on Special Interest Group on Data Communication (SIGCOMM '14), pp. 163–174, ACM, Chicago, Ill, USA, August 2014.

[24] J. Antoniou and A. Pitsillides, Game Theory in Communication Networks: Cooperative Resolution of Interactive Networking Scenarios, CRC Press, Boca Raton, Fla, USA, 2013.

[25] M. S. Seddiki, M. Frikha, and Y.-Q. Song, "A non-cooperative game-theoretic framework for resource allocation in network virtualization," Telecommunication Systems, vol. 61, no. 2, pp. 209–219, 2016.

[26] L. Pavel, Game Theory for Control of Optical Networks, Springer, New York, NY, USA, 2012.

[27] E. Altman, T. Boulogne, R. El-Azouzi, T. Jimenez, and L. Wynter, "A survey on networking games in telecommunications," Computers & Operations Research, vol. 33, no. 2, pp. 286–311, 2006.

[28] S. Nedevschi, L. Popa, G. Iannaccone, S. Ratnasamy, and D. Wetherall, "Reducing network energy consumption via sleeping and rate-adaptation," in Proceedings of the 5th USENIX Symposium on Networked Systems Design and Implementation (NSDI '08), pp. 323–336, San Francisco, Calif, USA, April 2008.

[29] S. Henzler, Power Management of Digital Circuits in Deep Sub-Micron CMOS Technologies, vol. 25 of Springer Series in Advanced Microelectronics, Springer, Dordrecht, The Netherlands, 2007.

[30] K. Li, "Scheduling parallel tasks on multiprocessor computers with efficient power management," in Proceedings of the IEEE International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum (IPDPSW '10), pp. 1–8, IEEE, Atlanta, Ga, USA, April 2010.

[31] T. Basar, Lecture Notes on Non-Cooperative Game Theory, Hamilton Institute, Kildare, Ireland, 2010, http://www.hamilton.ie/ollie/Downloads/Game.pdf.

[32] A. Orda, R. Rom, and N. Shimkin, "Competitive routing in multiuser communication networks," IEEE/ACM Transactions on Networking, vol. 1, no. 5, pp. 510–521, 1993.

[33] E. Altman, T. Basar, T. Jimenez, and N. Shimkin, "Competitive routing in networks with polynomial costs," IEEE Transactions on Automatic Control, vol. 47, no. 1, pp. 92–96, 2002.

[34] J. B. Rosen, "Existence and uniqueness of equilibrium points for concave N-person games," Econometrica, vol. 33, no. 3, pp. 520–534, 1965.

[35] Y. A. Korilis, A. A. Lazar, and A. Orda, "Architecting noncooperative networks," IEEE Journal on Selected Areas in Communications, vol. 13, no. 7, pp. 1241–1251, 1995.

[36] MATLAB®: The Language of Technical Computing, http://www.mathworks.com/products/matlab.

[37] H. Yaïche, R. R. Mazumdar, and C. Rosenberg, "A game theoretic framework for bandwidth allocation and pricing in broadband networks," IEEE/ACM Transactions on Networking, vol. 8, no. 5, pp. 667–678, 2000.

Research Article
A Processor-Sharing Scheduling Strategy for NFV Nodes

Giuseppe Faraci, Alfio Lombardo, and Giovanni Schembra

Dipartimento di Ingegneria Elettrica, Elettronica e Informatica (DIEEI), University of Catania, 95123 Catania, Italy

Correspondence should be addressed to Giovanni Schembra; schembra@dieei.unict.it

Received 2 November 2015; Accepted 12 January 2016

Academic Editor: Xavier Hesselbach

Copyright © 2016 Giuseppe Faraci et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The introduction of the two paradigms SDN and NFV to "softwarize" the current Internet is making management and resource allocation two key challenges in the evolution towards the Future Internet. In this context, this paper proposes Network-Aware Round Robin (NARR), a processor-sharing strategy to reduce delays in traversing SDN/NFV nodes. The application of NARR alleviates the job of the Orchestrator by automatically working at the intranode level, dynamically assigning the processor slices to the virtual network functions (VNFs) according to the state of the queues associated with the output links of the network interface cards (NICs). An extensive simulation set is presented to show the improvements achieved with respect to two more processor-sharing strategies chosen as reference.

1 Introduction

In the last few years, the diffusion of new complex and efficient distributed services in the Internet is becoming increasingly difficult, because of the ossification of the Internet protocols, the proprietary nature of existing hardware appliances, the costs, and the lack of skilled professionals for maintaining and upgrading them.

In order to alleviate these problems, two new network paradigms, Software Defined Networks (SDN) [1–5] and Network Functions Virtualization (NFV) [6, 7], have been recently proposed, with the specific target of improving the flexibility of network service provisioning and reducing the time to market of new services.

SDN is an emerging architecture that aims at making the network dynamic, manageable, and cost-effective, by decoupling the system that makes decisions about where traffic is sent (the control plane) from the underlying system that forwards traffic to the selected destination (the data plane). In this way the network control becomes directly programmable, and the underlying infrastructure is abstracted for applications and network services.

NFV is a core structural change in the way telecommunication infrastructure is deployed. The NFV initiative started in late 2012 by some of the biggest telecommunications service providers, which formed an Industry Specification Group (ISG) within the European Telecommunications Standards Institute (ETSI). The interest has grown, involving today more than 28 network operators and over 150 technology providers from across the telecommunications industry [7]. The NFV paradigm leverages on virtualization technologies and commercial off-the-shelf programmable hardware, such as general-purpose servers, storage, and switches, with the final target of decoupling the software implementation of network functions from the underlying hardware.

The coexistence and the interaction of both NFV and SDN paradigms is giving to the network operators the possibility of achieving greater agility and acceleration in new service deployments, with a consequent considerable reduction of both Capital Expenditure (CAPEX) and Operational Expenditure (OPEX) [8].

One of the main challenging problems in deploying an SDN/NFV network is an efficient design of resource allocation and management functions that are in charge of the network Orchestrator. Although this task is well covered in data center and cloud scenarios [9, 10], it is currently a challenging problem in a geographic network, where transmission delays cannot be neglected and transmission capacities of the interconnection links are not comparable with the case of the above scenarios. This is the reason why the problem of orchestrating an SDN/NFV network is still open and attracts a lot of research interest from both academia and industry.

Hindawi Publishing Corporation, Journal of Electrical and Computer Engineering, Volume 2016, Article ID 3583962, 10 pages, http://dx.doi.org/10.1155/2016/3583962


More specifically, the whole problem of orchestrating an SDN/NFV network is very complex, because it involves a design work at both internode and intranode levels [11, 12]. At the internode level, and in all the cases where the time between each execution is significantly greater than the time to collect, compute, and disseminate results, the application of a centralized approach is practicable [13, 14]. In these cases, in fact, taking into consideration both traffic characterization and required level of quality of service (QoS), the Orchestrator is able to decide in a centralized manner how many instances of the same function have to be run simultaneously, the network nodes that have to execute them, and the routing strategies that allow traffic flows to cross the nodes where the requested network functions are running.

Instead, centralizing a strategy dealing with operations that require dynamic reconfiguration of network resources is actually unfeasible to be executed by the Orchestrator, for problems of resilience and scalability. Therefore, adaptive management operations with short timescales require a distributed approach. The authors of [15] propose a framework to support adaptive resource management operations, which involve short timescale reconfiguration of network resources, showing how requirements in terms of load-balancing and energy management [16–18] can be satisfied. Another work in the same direction is [19], which proposes a solution for the consolidation of VMs on local computing resources, exclusively based on local information. The work in [20] aims at alleviating the inter-VM network latency, defining a hypervisor scheduler algorithm that is able to take into consideration the allocation of the resources within a consolidated environment, scheduling VMs to reduce their waiting latency in the run queue. Another approach is applied in [21], which introduces a policy to manage the internal on/off switching of virtual network functions (VNFs) in NFV-compliant Customer Premises Equipment (CPE) devices.

The focus of this paper is on another fundamental problem that is inherent to resource allocation within an SDN/NFV node, that is, the decision of the percentage of CPU to be assigned to each VNF. If at a first glance this could seem a classical problem of processor sharing, which has been widely explored in the past literature [15, 22, 23], actually it is much more complex, because performance can be strongly improved by leveraging on the correlation with the output queues associated with the network interface cards (NICs).

With all this in mind, the main contribution of this paper is the definition of a processor-sharing policy, in the following referred to as Network-Aware Round Robin (NARR), which is specific for SDN/NFV nodes. Starting from the consideration that packets that have received the service of a network function from a virtual machine (VM) running on a given node are enqueued to wait for transmission through a given NIC, the proposed strategy dynamically changes the slices of the CPU assigned to each VNF according to the state of the output NIC queues. More specifically, the NARR strategy gives a larger CPU slice to serve packets that will leave the node through the NIC that is currently less loaded, in such a way as to minimize wastes of the NIC output link capacities, also minimizing the overall delay experienced by packets traversing nodes that implement NARR.

As a side contribution, the paper calculates an on-off model for the traffic leaving the SDN/NFV node on each NIC output link. This model can be used as a building block for the design and performance evaluation of an entire network.

The paper is structured as follows. Section 2 describes the node architecture. Section 3 introduces the NARR strategy. Section 4 presents a case study and shows some numerical results. Finally, Section 5 draws some conclusions and discusses some future work.

2 System Description

The target of this section is the description of the system weconsider in the rest of the paper It is an SDNNFV node asthe one considered in [11 12] where we will apply the NARRstrategy Its architecture shown in Figure 1(a) is compliantwith the ETSI Specifications [24] It is composed of three dif-ferent domains namely theCompute domain theHypervisordomain and the Infrastructure Network domain The Com-pute domain provides the computational and storage hard-ware resources that allow the node to host the VNFs Thanksto the computing and storage virtualization provided by theHypervisor domain a VM can be created migrated fromone node to another one and halted in order to optimize thedeployment according to specific performance parametersCommunications among theVMs and between theVMs andthe external environment are provided by the InfrastructureNetwork domain

The SDNNFVnode is remotely controlled by theOrches-trator whose architecture is shown in Figure 1(b) It is con-stituted by three main blocks The Orchestration Engine exe-cutes all the algorithms and the strategies to manage andorchestrate the whole network After each decision theOrchestration Engine requests that the NFV Coordinator in-stantiates migrates or halts VMs and consequently requeststhat the SDN Controller modifies the flow tables of the SDNswitches in the network in such a way that traffic flows cantraverse VMs hosting the requested VNFs

With this in mind a functional architecture of the NFVnode is represented in Figure 2 Its main components are theProcessor whichmanages the Compute domain and hosts theHypervisor domain and the Network Card Interfaces (NICs)with their queues which constitute the ldquoNetwork Hardwarerdquoblock in Figure 1

Let $M$ be the number of virtual network functions (VNFs) that are running in the node, and let $L$ be the number of output NICs. In order to simplify notation, in the following we will assume that all the NICs have the same characteristics in terms of buffer capacity and output rate. So, let $K^{(\mathrm{NIC})}$ be the size of the queue associated with each NIC, that is, the maximum number of packets that each queue can contain, and let $\mu^{(\mathrm{NIC})}$ be the transmission rate of the output link associated with each NIC, expressed in bit/s.

The Flow Distributor block has the task of routing each entering flow towards the function required by the flow. It is a software SDN switch that can be implemented, for example, with Open vSwitch [25]. It routes the flows to the VMs running the requested VNFs according to the control messages received by the SDN Controller residing in the Orchestrator.

[Figure 1: Network function virtualization infrastructure. (a) NFV node: the Compute domain (computing, storage, and network hardware), the Virtualization layer, the Hypervisor domain (virtual computing, virtual storage, and virtual network), the hosted VNFs, and the NICs. (b) Orchestrator: Orchestration Engine, NFV Coordinator, and SDN Controller, managing the NFVI nodes.]

[Figure 2: NFV node functional architecture: the Flow Distributor, the Processor with the Processor Arbiter and the function queues $F_1, \dots, F_M$, served with the rates $\mu^{(F)}_{[1]}, \dots, \mu^{(F)}_{[M]}$, and the NIC queues $N_1, \dots, N_L$, each served with rate $\mu^{(\mathrm{NIC})}$.]

The most common protocol that can be used for the communications between the SDN Controller and the Orchestrator is OpenFlow [26].

Let $\mu^{(P)}$ be the total processing rate of the processor, expressed in packets/s. This rate is shared among all the active functions according to a processor rate scheduling strategy. Let $\mu^{(F)}$ be the array whose generic element $\mu^{(F)}_{[m]}$, with $m \in \{1, \dots, M\}$, is the portion of the processor rate assigned to the VM implementing the function $F_m$. Of course, we have

$$\sum_{m=1}^{M} \mu^{(F)}_{[m]} = \mu^{(P)}. \quad (1)$$

Once a packet has been served by the required function, it is sent to one of the NICs to exit from the node. If the NIC is transmitting another packet, the arriving packets are enqueued in the NIC queue. We will indicate the queue associated with the generic NIC $l$ as $Q^{(\mathrm{NIC})}_l$.

In order to implement the NARR strategy proposed in this paper, we realize the block relative to each function with a set of $L$ parallel queues, in the following referred to as inside-function queues, as shown in Figure 3 for the generic function $F_m$. The generic $l$th inside-function queue of the function $F_m$, indicated as $Q_{m,l}$ in Figure 3, is used to enqueue packets that, after receiving the service of the function $F_m$, will leave the node through the NIC $l$. Let $K^{(F)}_{\mathrm{Ins}}$ be the size of each inside-function queue. Each inside-function queue of the generic function $F_m$ receives a portion of the processor rate assigned to that function. Let $\mu^{(F_{\mathrm{Ins}})}_{[m,l]}$ be the portion of the processor rate assigned to the queue $Q_{m,l}$ of the function $F_m$. Of course, we have

$$\sum_{l=1}^{L} \mu^{(F_{\mathrm{Ins}})}_{[m,l]} = \mu^{(F)}_{[m]}. \quad (2)$$


[Figure 3: Block diagram of the generic function $F_m$: the inside-function queues $Q_{m,1}, \dots, Q_{m,l}, \dots, Q_{m,L}$, served with the rate portions $\mu^{(F_{\mathrm{Ins}})}_{[m,1]}, \dots, \mu^{(F_{\mathrm{Ins}})}_{[m,l]}, \dots, \mu^{(F_{\mathrm{Ins}})}_{[m,L]}$.]

The portion of processor rate associated with each inside-function queue is dynamically changed by the Processor Arbiter according to the NARR strategy described in Section 3.

3. NARR Processor-Sharing Strategy

The NARR (Network-Aware Round Robin) processor-sharing strategy observes the state of both the inside-function queues and the NIC queues, with the goal of reducing as far as possible the inactivity periods of the NIC output links. As already introduced in the Introduction, its definition starts from the fact that packets that have received the service of a VNF are enqueued to wait for transmission through a given NIC. So, in order to avoid output link capacity waste, NARR dynamically changes the slices of the CPU assigned to each VNF, and in particular to each inside-function queue, according to the state of the output NIC queues, assigning larger CPU slices to serve packets that will leave the node through less-loaded NICs.

More specifically, the Processor Arbiter decides the processor rate portions according to two different steps.

Step 1 (assignment of the processor rate portion to the aggregation of queues whose output is a specific NIC). This step meets the target of the proposed strategy, that is, to reduce as much as possible underutilization of the NIC output links and, as a consequence, delays in the relative queues. To this purpose, let us consider a virtual queue that contains all the packets that are stored in all the inside-function queues $Q_{m,l}$, for each $m \in [1, M]$, that is, all the packets that will leave the node through the NIC $l$. Let us indicate this virtual queue as $Q^{(\rightarrow\mathrm{NIC}_l)}_{\mathrm{Aggr}}$ and its service rate as $\mu^{(\rightarrow\mathrm{NIC}_l)}_{\mathrm{Aggr}}$. Of course, we have

$$\mu^{(\rightarrow\mathrm{NIC}_l)}_{\mathrm{Aggr}} = \sum_{m=1}^{M} \mu^{(F_{\mathrm{Ins}})}_{[m,l]}. \quad (3)$$

With this in mind, the idea is to give a higher processor slice to the inside-function queues whose flows are directed to the NICs that are emptying.

Taking into account the goal of privileging the flows that will leave the node through underloaded NICs, the Processor Arbiter calculates $\mu^{(\rightarrow\mathrm{NIC}_l)}_{\mathrm{Aggr}}$ as follows:

$$\mu^{(\rightarrow\mathrm{NIC}_l)}_{\mathrm{Aggr}} = \frac{q_{\mathrm{ref}} - S^{(\mathrm{NIC})}_l}{\sum_{j=1}^{L} \left( q_{\mathrm{ref}} - S^{(\mathrm{NIC})}_j \right)} \, \mu^{(P)}, \quad (4)$$

where $S^{(\mathrm{NIC})}_l$ represents the state of the queue associated with the NIC $l$, while $q_{\mathrm{ref}}$ is defined as follows:

$$q_{\mathrm{ref}} = \min \left\{ \alpha \cdot \max_j \left( S^{(\mathrm{NIC})}_j \right),\; K^{(\mathrm{NIC})} \right\}. \quad (5)$$

The term $q_{\mathrm{ref}}$ is a reference target value calculated from the state of the NIC queue that has the highest length, amplified with a coefficient $\alpha$ and truncated to the maximum queue size $K^{(\mathrm{NIC})}$. It is determined in such a way that, if we consider $\alpha = 1$, the NIC queue that has the highest length does not receive packets from the inside-function queues, because their service rate is set to zero; the other queues receive packets with a rate that is proportional to the distance between their length and the length of the most overloaded NIC queue. However, through an extensive set of simulations, we deduced that setting $\alpha = 1$ causes bad performance, because there is always a group of inside-function queues that are not served. Instead, all the $\alpha$ values in the interval $]1, 2]$ give almost equivalent performance. For this reason, in the numerical analysis presented in Section 4, we have set $\alpha = 1.2$.

Step 2 (assignment of the processor rate portion to each inside-function queue). Let us consider the generic $l$th inside-function queue of the function $F_m$, that is, the queue $Q_{m,l}$. Its service rate is calculated as being proportional to the current state of this queue in comparison with the other $l$th queues of the other functions. To this purpose, let us indicate the state of the virtual queue $Q^{(\rightarrow\mathrm{NIC}_l)}_{\mathrm{Aggr}}$ as $S^{(\rightarrow\mathrm{NIC}_l)}_{\mathrm{Aggr}}$. Of course, it can be calculated as the sum of the states of all the inside-function queues $Q_{m,l}$, for each $m \in [1, M]$, that is,

$$S^{(\rightarrow\mathrm{NIC}_l)}_{\mathrm{Aggr}} = \sum_{k=1}^{M} S^{(F_{\mathrm{Ins}})}_{k,l}. \quad (6)$$

So the service rate of the inside-function queue $Q_{m,l}$ is determined as a fraction of the service rate assigned at the first step to the virtual queue $Q^{(\rightarrow\mathrm{NIC}_l)}_{\mathrm{Aggr}}$, $\mu^{(\rightarrow\mathrm{NIC}_l)}_{\mathrm{Aggr}}$, as follows:

$$\mu^{(F_{\mathrm{Ins}})}_{[m,l]} = \frac{S^{(F_{\mathrm{Ins}})}_{m,l}}{S^{(\rightarrow\mathrm{NIC}_l)}_{\mathrm{Aggr}}} \, \mu^{(\rightarrow\mathrm{NIC}_l)}_{\mathrm{Aggr}}. \quad (7)$$

Of course, if at any time an inside-function queue remains empty, the processor rate portion assigned to it will be shared among the other queues, proportionally to the processor portions previously assigned. Likewise, if at some instant an empty queue receives a new packet, the previous processor rate portion is reassigned to that queue.
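As a compact illustration of the two steps, the following Python sketch computes the rate portions from the current queue states (the function and variable names are ours; the fallback used when all NIC queues are empty is our assumption, since the paper does not detail that corner case, while the empty inside-function queue case follows directly from (7), which assigns a zero slice that is implicitly redistributed among the nonempty queues):

def narr_rates(s_nic, s_ins, mu_p, k_nic, alpha=1.2):
    """A minimal sketch of the NARR Processor Arbiter (equations (4)-(7)).

    s_nic[l]    : current length of the l-th NIC queue, S^(NIC)_l
    s_ins[m][l] : current length of the inside-function queue Q_{m,l}
    mu_p        : total processor rate mu^(P) (packets/s)
    k_nic       : NIC queue size K^(NIC) (packets)
    alpha       : amplification coefficient, 1.2 as in the paper

    Returns mu_ins[m][l], the rate portion assigned to each queue Q_{m,l}.
    """
    L = len(s_nic)
    M = len(s_ins)

    # Step 1: reference target value (equation (5)).
    q_ref = min(alpha * max(s_nic), k_nic)

    # Aggregate per-NIC rates (equation (4)): queues closer to q_ref,
    # i.e., more loaded NICs, receive a smaller slice.
    weights = [q_ref - s for s in s_nic]
    tot = sum(weights)
    if tot > 0:
        mu_aggr = [w / tot * mu_p for w in weights]
    else:
        mu_aggr = [mu_p / L] * L  # assumption: equal split if all NIC queues are empty

    # Step 2: split each aggregate rate among the inside-function queues
    # proportionally to their lengths (equations (6) and (7)). Empty queues
    # get a zero slice, so their share goes to the nonempty ones.
    mu_ins = [[0.0] * L for _ in range(M)]
    for l in range(L):
        s_aggr = sum(s_ins[m][l] for m in range(M))  # equation (6)
        if s_aggr == 0:
            continue
        for m in range(M):
            mu_ins[m][l] = s_ins[m][l] / s_aggr * mu_aggr[l]
    return mu_ins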


4. Case Study

In this section we present a numerical analysis of the behavior of an SDN/NFV node that applies the proposed NARR processor-sharing strategy, with the target of evaluating the achieved performance. To this purpose, we will consider two other processor-sharing strategies as reference, in the following referred to as round robin (RR) and queue-length weighted round robin (QLWRR). In both the reference cases the node has the same $L$ NIC queues, but it has only $M$ processor queues, one for each function. Each of the $M$ queues has a size of $K^{(F)} = L \cdot K^{(F)}_{\mathrm{Ins}}$, where $K^{(F)}_{\mathrm{Ins}}$ represents the size of each internal function queue already defined so far for the proposed strategy.

The RR strategy applies the classical round robin scheduling policy to serve the $M$ function queues, that is, it serves each function queue with a rate $\mu^{(F)}_{\mathrm{RR}} = \mu^{(P)}/M$.

The QLWRR strategy, on the other hand, serves each function queue with a rate that is proportional to the queue length, that is,

$$\mu^{(F_m)}_{\mathrm{QLWRR}} = \frac{S^{(F_m)}}{\sum_{k=1}^{M} S^{(F_k)}} \, \mu^{(P)}, \quad (8)$$

where $S^{(F_m)}$ is the state of the queue associated with the function $F_m$.
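For comparison, the two reference policies admit an equally short sketch (again illustrative, with names of ours; the fallback when all function queues are empty is our assumption):

def rr_rates(m_funcs, mu_p):
    # Round robin: every function queue gets the same slice mu^(P)/M.
    return [mu_p / m_funcs] * m_funcs

def qlwrr_rates(s_f, mu_p):
    # Queue-length weighted round robin (equation (8)): each function
    # queue F_m is served proportionally to its current length S^(F_m).
    tot = sum(s_f)
    if tot == 0:
        return rr_rates(len(s_f), mu_p)  # assumption: fall back to RR
    return [s / tot * mu_p for s in s_f]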

4.1. Parameter Settings. In this numerical analysis we consider a node with $M = 4$ VNFs and $L = 3$ output NICs. We loaded the node with a balanced traffic constituted by $N_F = 12$ flows, each characterized by a different pair ⟨requested VNF, output NIC⟩. Each flow has been generated with an on-off model characterized by exponentially distributed on and off periods. When a flow is in the off state, no packets arrive to the node from it; instead, when it is in the on state, packets arrive with an average rate $\lambda_{\mathrm{ON}}$. Each packet is assumed to have an exponentially distributed size, with a mean of 1 kbyte. In all the simulations we have considered the same on-off cycle duration $\delta = T_{\mathrm{OFF}} + T_{\mathrm{ON}} = 5$ msec, while we have varied the burstiness. Burstiness of on-off sources is defined as

$$b = \frac{\lambda_{\mathrm{ON}}}{\lambda_{\mathrm{Mean}}}, \quad (9)$$

where $\lambda_{\mathrm{Mean}}$ is the mean emission rate. Now, indicating the probability of the ON state as $\pi_{\mathrm{ON}}$, and taking into account that $\lambda_{\mathrm{Mean}} = \lambda_{\mathrm{ON}} \cdot \pi_{\mathrm{ON}}$ and $\pi_{\mathrm{ON}} = T_{\mathrm{ON}}/(T_{\mathrm{OFF}} + T_{\mathrm{ON}})$, we have

$$b = \frac{T_{\mathrm{OFF}} + T_{\mathrm{ON}}}{T_{\mathrm{ON}}}. \quad (10)$$

In our analysis the burstiness has been varied in the range $[2, 22]$. Consequently, we have derived the mean durations of the off and on periods as follows:

$$T_{\mathrm{ON}} = \frac{\delta}{b}, \qquad T_{\mathrm{OFF}} = \delta - T_{\mathrm{ON}}. \quad (11)$$

Finally, in order to maintain the same mean packet rate for different values of $T_{\mathrm{OFF}}$ and $T_{\mathrm{ON}}$, we have assumed that 7.5 packets are transmitted on average in an on period, that is, $\lambda_{\mathrm{ON}} = 7.5/T_{\mathrm{ON}}$. The resulting mean emission rate is $\lambda_{\mathrm{Mean}} = 12.29$ Mbit/s.
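The derivation above can be condensed in a short Python sketch (the constant names are ours; a 1 kbyte packet is taken as 8192 bits, which is consistent with the stated 12.29 Mbit/s):

DELTA = 5e-3          # on-off cycle duration delta = T_OFF + T_ON (s)
PKTS_PER_ON = 7.5     # packets transmitted on average in an on period
PKT_SIZE_BITS = 8192  # mean packet size: 1 kbyte

def on_off_params(b):
    """Derive the on-off source parameters from the burstiness b
    (equations (9)-(11))."""
    t_on = DELTA / b                      # equation (11)
    t_off = DELTA - t_on
    lam_on = PKTS_PER_ON / t_on           # packets/s during the on state
    lam_mean = lam_on * (t_on / DELTA)    # = lam_on / b, from equation (9)
    return t_on, t_off, lam_on, lam_mean

# For any burstiness the mean rate stays at 1500 packets/s, that is,
# 1500 * 8192 bits ~= 12.29 Mbit/s, as stated in the text.
t_on, t_off, lam_on, lam_mean = on_off_params(b=7)
print(lam_mean * PKT_SIZE_BITS / 1e6)     # ~12.29 (Mbit/s)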

As far as the NICs are concerned, we have considered an output rate of $\mu^{(\mathrm{NIC})} = 98.0$ Mbit/s, so having a utilization coefficient on each NIC of $\rho^{(\mathrm{NIC})} = 0.5$, and a queue size $K^{(\mathrm{NIC})} = 3000$ packets. Instead, regarding the processor, we considered a size of each inside-function queue of $K^{(F)}_{\mathrm{Ins}} = 3000$ packets. Finally, we have analyzed two different processor cases. In the first case we considered a processor that is able to process $\mu^{(P)} = 30.6$ kpackets/s, while in the second case we assumed a processor rate of $\mu^{(P)} = 20.4$ kpackets/s. Therefore, defining the processor utilization coefficient as

$$\rho^{(P)} = \frac{N_F \cdot \lambda_{\mathrm{Mean}}}{\mu^{(P)}}, \quad (12)$$

with $N_F \cdot \lambda_{\mathrm{Mean}}$ being the total mean arrival rate to the node, the two considered cases are characterized by a processor utilization coefficient of $\rho^{(P)}_{\mathrm{Low}} = 0.6$ and $\rho^{(P)}_{\mathrm{High}} = 0.9$, respectively.
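A quick numerical check of (12), under the per-flow mean rate of 1500 packets/s derived above, reproduces the two utilization coefficients up to rounding:

N_F = 12           # number of flows
LAM_MEAN = 1500.0  # per-flow mean packet rate (packets/s), see Section 4.1

for mu_p in (30.6e3, 20.4e3):        # the two processor rates (packets/s)
    rho = N_F * LAM_MEAN / mu_p      # equation (12)
    print(round(rho, 2))             # -> 0.59 (~0.6) and 0.88 (~0.9)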

4.2. Numerical Results. In this section we present some results achieved by discrete-event simulations. The simulation tool used in the paper is publicly available in [27]. We first present a temporal analysis of the main variables characterizing the SDN/NFV node, and then we show a performance comparison of NARR with the two strategies RR and QLWRR, taken as a reference.

For the temporal analysis we focus on a short time interval of 720 μs, in order to be able to clearly highlight the evolution of the considered processes. We have loaded the node with an on-off traffic like the one described in Section 4.1, with a burstiness $b = 7$.

Figures 4, 5, and 6 show the time evolution of the length of the queues associated with the NICs, the processor slice assigned to each virtual queue loading each NIC, and the length of the same virtual queues. We can subdivide the considered time interval into three different periods.

In the first period, ranging between the instants 0.1063 and 0.1067, from Figure 4 we can notice that the NIC queue $Q^{(\mathrm{NIC})}_1$ has a greater length than the queue $Q^{(\mathrm{NIC})}_3$, while $Q^{(\mathrm{NIC})}_2$ is empty. For this reason, in this period, as shown in Figure 5, the processor is shared between $Q^{(\rightarrow\mathrm{NIC}_1)}_{\mathrm{Aggr}}$ and $Q^{(\rightarrow\mathrm{NIC}_3)}_{\mathrm{Aggr}}$, and the slice assigned to serve $Q^{(\rightarrow\mathrm{NIC}_3)}_{\mathrm{Aggr}}$ is higher, in such a way that the two queue lengths $Q^{(\mathrm{NIC})}_1$ and $Q^{(\mathrm{NIC})}_3$ reach the same value, a situation that happens at the end of the first period, around the instant 0.1067. During this period the behavior of both the virtual queues $Q^{(\rightarrow\mathrm{NIC}_1)}_{\mathrm{Aggr}}$ and $Q^{(\rightarrow\mathrm{NIC}_3)}_{\mathrm{Aggr}}$ in Figure 6 remains flat, showing the fact that the received processor rates are able to balance the arrival rates.

At the beginning of the second period, which ranges between the instants 0.1067 and 0.10693, the processor rate assigned to the virtual queue $Q^{(\rightarrow\mathrm{NIC}_3)}_{\mathrm{Aggr}}$ has become insufficient to serve the amount of arriving traffic, and so the virtual queue $Q^{(\rightarrow\mathrm{NIC}_3)}_{\mathrm{Aggr}}$ increases its length, as shown in Figure 6.

[Figure 4: Length evolution of the NIC queues $S^{(\mathrm{NIC})}_1$, $S^{(\mathrm{NIC})}_2$, and $S^{(\mathrm{NIC})}_3$ (packets) versus time (s).]

[Figure 5: Packet rate (kpackets/s) assigned to the virtual queues, $\mu^{(\rightarrow\mathrm{NIC}_1)}_{\mathrm{Aggr}}$, $\mu^{(\rightarrow\mathrm{NIC}_2)}_{\mathrm{Aggr}}$, and $\mu^{(\rightarrow\mathrm{NIC}_3)}_{\mathrm{Aggr}}$, versus time (s).]

[Figure 6: Length evolution of the virtual queues $S^{(\rightarrow\mathrm{NIC}_1)}_{\mathrm{Aggr}}$, $S^{(\rightarrow\mathrm{NIC}_2)}_{\mathrm{Aggr}}$, and $S^{(\rightarrow\mathrm{NIC}_3)}_{\mathrm{Aggr}}$ (packets) versus time (s).]

During this second period the processor slices assigned to the two queues $Q^{(\rightarrow\mathrm{NIC}_1)}_{\mathrm{Aggr}}$ and $Q^{(\rightarrow\mathrm{NIC}_3)}_{\mathrm{Aggr}}$ are adjusted in such a way that $Q^{(\mathrm{NIC})}_1$ and $Q^{(\mathrm{NIC})}_3$ remain with comparable lengths.

The last period starts at the instant 0.10693, characterized by the fact that the aggregated queue $Q^{(\rightarrow\mathrm{NIC}_2)}_{\mathrm{Aggr}}$ leaves the empty state and therefore participates in the processor-sharing process. Since the NIC queue $Q^{(\mathrm{NIC})}_2$ is low-loaded, as shown in Figure 4, the largest slice is assigned to $Q^{(\rightarrow\mathrm{NIC}_2)}_{\mathrm{Aggr}}$, in such a way that $Q^{(\mathrm{NIC})}_2$ can reach the same length of the other two NIC queues as soon as possible.

Now, in order to show how the second step of the proposed strategy works, we present the behavior of the inside-function queues whose output is sent to the NIC queue $Q^{(\mathrm{NIC})}_3$ during the same short time interval considered so far. More specifically, Figures 7 and 8 show the length of the considered inside-function queues and the processor slices assigned to them, respectively. The behavior of the queue $Q_{2,3}$ is not shown, because it is empty in the considered period. As we can observe from the above figures, we can subdivide the interval into four periods:

(i) The first period, ranging in the interval [0.1063, 0.10657], is characterized by an empty state of the queue $Q_{3,3}$. Thus in this period the processor slice assigned to the aggregated queue $Q^{(\rightarrow\mathrm{NIC}_3)}_{\mathrm{Aggr}}$ is shared by $Q_{1,3}$ and $Q_{4,3}$ only.

(ii) During the second period, ranging in the interval [0.10657, 0.1067], $Q_{1,3}$ is scarcely loaded (in particular, it is empty in the second part of this period), and so the processor slice assigned to $Q_{3,3}$ is increased.

(iii) In the third period, ranging between 0.1067 and 0.10693, all the queues increase and equally share the processor.

(iv) Finally, in the last period, starting at the instant 0.10693, as already observed in Figure 5, the processor slice assigned to the aggregated queue $Q^{(\rightarrow\mathrm{NIC}_3)}_{\mathrm{Aggr}}$ is suddenly decreased, and consequently the slices assigned to the queues $Q_{1,3}$, $Q_{3,3}$, and $Q_{4,3}$ are decreased as well.

The steady-state analysis is presented in Figures 9, 10, and 11, which show the mean delay in the inside-function queues, the mean delay in the NIC queues, and the durations of the off- and on-states on the output links. The values reported in all the figures have been derived as the mean values of the results of many simulation experiments, using Student's t-distribution and with a 95% confidence interval. The number of experiments carried out to evaluate each numerical result has been automatically decided by the simulation tool, with the requirement of achieving a confidence interval less than 0.001 of the estimated mean value. To this purpose, the confidence interval is calculated at the end of each run, and simulation is stopped only when the confidence interval matches the maximum error requirement. The duration of each run has been chosen in such a way that the sample standard deviation is so low that less than 30 runs are enough to match the requirement.
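A minimal sketch of such a stopping rule, under the assumption that the estimate is the sample mean over independent runs (the scipy call is one possible implementation):

import math
from scipy import stats

def ci_halfwidth(samples, confidence=0.95):
    """Half-width of the Student's t confidence interval for the mean."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    t = stats.t.ppf(1 - (1 - confidence) / 2, df=n - 1)
    return mean, t * math.sqrt(var / n)

def enough_runs(samples, max_rel_error=0.001):
    """Stopping rule: the 95% confidence interval must stay within
    0.001 of the estimated mean, as required in the text."""
    mean, hw = ci_halfwidth(samples)
    return hw <= max_rel_error * abs(mean)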


[Figure 7: Length evolution of the inside-function queues $Q_{m,3}$ (packets) versus time (s), for each $m \in [1, M]$: (a) queue $Q_{1,3}$; (b) queue $Q_{3,3}$; (c) queue $Q_{4,3}$.]

The figures compare the results achieved with the proposed strategy with the ones obtained with the two strategies RR and QLWRR. As said so far, results have been obtained against the burstiness and for two different values of the utilization coefficient, that is, $\rho^{(P)}_{\mathrm{Low}} = 0.6$ and $\rho^{(P)}_{\mathrm{High}} = 0.9$.

As expected, the mean lengths of the processor queues and the NIC queues increase with both the burstiness and the utilization coefficient.

Instead, we can note that the mean length of the processor queues is not affected by the applied policy. In fact, packets requiring the same function are enqueued in a unique queue and served with a rate $\mu^{(P)}/4$ when the RR or QLWRR strategies are applied, while when the NARR strategy is used they are split into 12 different queues and served with a rate $\mu^{(P)}/12$. If in classical queueing theory [28] the second case is worse than the first one, because of the presence of service rate wastes during the more probable periods of empty queues, this is not the case here, because the processor capacity of an empty queue is dynamically reassigned to the other queues.

The advantage of the NARR strategy is evident in Figure 10, where the mean delay in the NIC queues is represented. In fact, we can observe that by only giving more processor rate to the most loaded processor queues (with the QLWRR strategy) performance improvements are negligible, while by applying the NARR strategy we are able to obtain a delay reduction of about 12% in the case of a more performant processor ($\rho^{(P)}_{\mathrm{Low}} = 0.6$), reaching 50% when the processor works with $\rho^{(P)}_{\mathrm{High}} = 0.9$. The performance gain achieved with the NARR strategy increases with burstiness and with the processor load conditions, which are both likely: the first condition is due to the high burstiness of Internet traffic; the second one is true as well, because the processor should not be overdimensioned for economic purposes; otherwise, if overdimensioned, it can be controlled with a processor rate management policy like the one presented in [29], in order to save energy.

Finally, Figure 11 shows the mean durations of the on- and off-periods on the node output links.

[Figure 8: Processor rate (kpackets/s) assigned to the inside-function queues $Q_{m,3}$ versus time (s), for each $m \in [1, M]$: (a) $\mu^{(F_{\mathrm{Ins}})}_{[1,3]}$; (b) $\mu^{(F_{\mathrm{Ins}})}_{[3,3]}$; (c) $\mu^{(F_{\mathrm{Ins}})}_{[4,3]}$.]

Only one curve is shown for each case, because we have considered a utilization coefficient $\rho^{(\mathrm{NIC})} = 0.5$ on each NIC queue and therefore, in the considered case, we have $T_{\mathrm{ON}} = T_{\mathrm{OFF}}$. As expected, the mean on-off durations are higher when the processor rate is higher (i.e., lower utilization coefficient). This is because in this case the NIC output rate is lower than the processor rate, and therefore batches of packets accumulate in the NIC queues and are transmitted in longer bursts. These results can be used to model the output traffic of each SDN/NFV node as an Interrupted Poisson Process (IPP) [30]. This model can be iteratively used to represent the input traffic of other nodes, with the final target of realizing the model of a whole SDN/NFV network.

5. Conclusions and Future Work

This paper addresses the problem of intranode resource allocation by introducing NARR, a processor-sharing strategy that leverages on the consideration that, in any SDN/NFV node, packets that have received the service of a VNF are enqueued to wait for transmission through one of the output NICs. Therefore, the idea at the base of NARR is to dynamically change the slices of the CPU assigned to each VNF according to the state of the output NIC queues, giving more CPU to serve packets that will leave the node through the less-loaded NICs. In this way, wastes of the NIC output link capacities are minimized, and consequently the overall delay experienced by packets traversing the nodes that implement NARR is reduced.

SDN/NFV nodes that implement NARR can coexist in the same network with nodes that use other strategies, so facilitating a gradual introduction in the Future Internet.

As a side contribution, the simulator tool, which is publicly available on the web, gives the on-off model of the output links associated with each NIC as one of the results. This model can be used as a building block to realize a model for the design and performance evaluation of a whole SDN/NFV network.

[Figure 9: Mean delay in the function internal queues (ms) versus burstiness, for NARR, QLWRR, and RR, with $\rho^{(P)}_{\mathrm{Low}} = 0.6$ and $\rho^{(P)}_{\mathrm{High}} = 0.9$.]

[Figure 10: Mean delay in the NIC queues (ms) versus burstiness, for NARR, QLWRR, and RR, with $\rho^{(P)}_{\mathrm{Low}} = 0.6$ and $\rho^{(P)}_{\mathrm{High}} = 0.9$.]

[Figure 11: Mean off- and on-state duration (μs) of the traffic on the output links versus burstiness, with $\rho^{(P)}_{\mathrm{Low}} = 0.6$ and $\rho^{(P)}_{\mathrm{High}} = 0.9$.]

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work has been partially supported by the INPUT (In-Network Programmability for Next-Generation Personal cloUd Service Support) project, funded by the European Commission under the Horizon 2020 Programme (Call H2020-ICT-2014-1, Grant no. 644672).


References

[1] White paper on "Software-Defined Networking: The New Norm for Networks," https://www.opennetworking.org.

[2] M. Yu, L. Jose, and R. Miao, "Software defined traffic measurement with OpenSketch," in Proceedings of the Symposium on Network Systems Design and Implementation (NSDI '13), vol. 13, pp. 29–42, Lombard, Ill, USA, April 2013.

[3] H. Kim and N. Feamster, "Improving network management with software defined networking," IEEE Communications Magazine, vol. 51, no. 2, pp. 114–119, 2013.

[4] S. H. Yeganeh, A. Tootoonchian, and Y. Ganjali, "On scalability of software-defined networking," IEEE Communications Magazine, vol. 51, no. 2, pp. 136–141, 2013.

[5] B. A. A. Nunes, M. Mendonca, X.-N. Nguyen, K. Obraczka, and T. Turletti, "A survey of software-defined networking: past, present, and future of programmable networks," IEEE Communications Surveys and Tutorials, vol. 16, no. 3, pp. 1617–1634, 2014.

[6] White paper on "Network Functions Virtualisation," http://portal.etsi.org/NFV/NFV_White_Paper.pdf.

[7] Network Functions Virtualisation (NFV): Network Operator Perspectives on Industry Progress, ETSI, October 2013, http://portal.etsi.org/NFV/NFV_White_Paper2.pdf.

[8] A. Manzalini, R. Saracco, E. Zerbini, et al., "Software-Defined Networks for Future Networks and Services," White Paper based on the IEEE Workshop SDN4FNS, http://sites.ieee.org/sdn4fns/whitepaper.

[9] R. Z. Frantz, R. Corchuelo, and J. L. Arjona, "An efficient orchestration engine for the cloud," in Proceedings of the IEEE 3rd International Conference on Cloud Computing Technology and Science (CloudCom '11), vol. 2, pp. 711–716, Athens, Greece, December 2011.

[10] K. Bousselmi, Z. Brahmi, and M. M. Gammoudi, "Cloud services orchestration: a comparative study of existing approaches," in Proceedings of the 28th IEEE International Conference on Advanced Information Networking and Applications Workshops (WAINA '14), pp. 410–416, Victoria, Canada, May 2014.

[11] A. Lombardo, A. Manzalini, V. Riccobene, and G. Schembra, "An analytical tool for performance evaluation of software defined networking services," in Proceedings of the IEEE Network Operations and Management Symposium (NOMS '14), pp. 1–7, Krakow, Poland, May 2014.

[12] A. Lombardo, A. Manzalini, G. Schembra, G. Faraci, C. Rametta, and V. Riccobene, "An open framework to enable NetFATE (network functions at the edge)," in Proceedings of the IEEE Workshop on Management Issues in SDN, SDI and NFV (Mission '15), pp. 1–6, London, UK, April 2015.

[13] A. Manzalini, R. Minerva, F. Callegati, W. Cerroni, and A. Campi, "Clouds of virtual machines in edge networks," IEEE Communications Magazine, vol. 51, no. 7, pp. 63–70, 2013.

[14] H. Moens and F. De Turck, "VNF-P: a model for efficient placement of virtualized network functions," in Proceedings of the 10th International Conference on Network and Service Management (CNSM '14), pp. 418–423, Rio de Janeiro, Brazil, November 2014.

[15] K. Katsalis, G. S. Paschos, Y. Viniotis, and L. Tassiulas, "CPU provisioning algorithms for service differentiation in cloud-based environments," IEEE Transactions on Network and Service Management, vol. 12, no. 1, pp. 61–74, 2015.

[16] D. Tuncer, M. Charalambides, G. Pavlou, and N. Wang, "Towards decentralized and adaptive network resource management," in Proceedings of the 7th International Conference on Network and Service Management (CNSM '11), pp. 1–6, Paris, France, October 2011.

[17] D. Tuncer, M. Charalambides, G. Pavlou, and N. Wang, "DACoRM: a coordinated, decentralized and adaptive network resource management scheme," in Proceedings of the IEEE Network Operations and Management Symposium (NOMS '12), pp. 417–425, IEEE, Maui, Hawaii, USA, April 2012.

[18] M. Charalambides, D. Tuncer, L. Mamatas, and G. Pavlou, "Energy-aware adaptive network resource management," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 369–377, Ghent, Belgium, May 2013.

[19] C. Mastroianni, M. Meo, and G. Papuzzo, "Probabilistic consolidation of virtual machines in self-organizing cloud data centers," IEEE Transactions on Cloud Computing, vol. 1, no. 2, pp. 215–228, 2013.

[20] B. Guan, J. Wu, Y. Wang, and S. U. Khan, "CIVSched: a communication-aware inter-VM scheduling technique for decreased network latency between co-located VMs," IEEE Transactions on Cloud Computing, vol. 2, no. 3, pp. 320–332, 2014.

[21] G. Faraci and G. Schembra, "An analytical model to design and manage a green SDN/NFV CPE node," IEEE Transactions on Network and Service Management, vol. 12, no. 3, pp. 435–450, 2015.

[22] L. Kleinrock, "Time-shared systems: a theoretical treatment," Journal of the ACM, vol. 14, no. 2, pp. 242–261, 1967.

[23] S. F. Yashkov and A. S. Yashkova, "Processor sharing: a survey of the mathematical theory," Automation and Remote Control, vol. 68, no. 9, pp. 1662–1731, 2007.

[24] ETSI GS NFV-INF 001 v1.1.1, "Network Functions Virtualization (NFV): Infrastructure Overview," 2015, https://www.etsi.org/deliver/etsi_gs/NFV-INF/001_099/001/01.01.01_60/gs_NFV-INF001v010101p.pdf.

[25] T. N. Subedi, K. K. Nguyen, and M. Cheriet, "OpenFlow-based in-network Layer-2 adaptive multipath aggregation in data centers," Computer Communications, vol. 61, pp. 58–69, 2015.

[26] N. McKeown, T. Anderson, H. Balakrishnan, et al., "OpenFlow: enabling innovation in campus networks," ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, pp. 69–74, 2008.

[27] NARR simulator v0.1, http://www.diit.unict.it/arti/Tools/NARRSimulator_v01.zip.

[28] L. Kleinrock, Queueing Systems, Volume 1: Theory, Wiley-Interscience, 1st edition, 1975.

[29] R. Bruschi, P. Lago, A. Lombardo, and G. Schembra, "Modeling power management in networked devices," Computer Communications, vol. 50, pp. 95–109, 2014.

[30] W. Fischer and K. S. Meier-Hellstern, "The Markov-Modulated Poisson Process (MMPP) cookbook," Performance Evaluation, vol. 18, no. 2, pp. 149–171, 1993.


Editorial

Design of High Throughput and Cost-Efficient Data Center Networks

Vincenzo Eramo,1 Xavier Hesselbach-Serra,2 Yan Luo,3 and Juan Felipe Botero4

1 Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, 00184 Rome, Italy
2 Network Engineering Department (ENTEL), Universitat Politecnica de Catalunya, 08034 Barcelona, Spain
3 Department of Electronics and Computers Engineering (DECE), University of Massachusetts Lowell, Lowell, MA 01854, USA
4 Department of Electronic and Telecommunications Engineering (DETE), Universidad de Antioquia, Oficina 19-450, Medellín, Colombia

Correspondence should be addressed to Vincenzo Eramo; vincenzo.eramo@uniroma1.it

Received 20 April 2016; Accepted 20 April 2016

Copyright © 2016 Vincenzo Eramo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Data centers (DCs) are characterized by the sharing of compute and storage resources to support Internet services. Today, many companies (Amazon, Google, Facebook, etc.) use data centers to offer storage, web search, and large-computation services, with multibillion dollar business. The servers are interconnected by elements (switches, routers, interconnection systems, etc.) of a network platform that is referred to as a Data Center Network (DCN).

Network Function Virtualization (NFV) technology, introduced by the European Telecommunications Standards Institute (ETSI), applies cloud computing techniques in the telecommunication field, allowing for a virtualization of the network functions to be executed on software modules running in data centers. Any network service is represented by a Service Function Chain (SFC), that is, a set of VNFs to be executed according to a given order. The running of VNFs needs the instantiation of VNF instances (VNFIs), which in general are software modules executed on virtual machines.

The support of NFV needs high-performance servers, due to the higher requirements of network services with respect to classical cloud applications.

The purpose of this special issue is to study and evaluate new solutions for the support of NFV technology.

The special issue consists of four papers, whose brief summaries are listed below.

"Server Resource Dimensioning and Routing of Service Function Chain in NFV Network Architectures" by V. Eramo et al. focuses on the resource dimensioning and SFC routing problems in NFV architectures. The objective of the problem is to minimize the number of SFCs dropped. The authors formulate the optimization problem and, due to its NP-hard complexity, heuristics are proposed for both cases of offline and online traffic demand.

"A Game for Energy-Aware Allocation of Virtualized Network Functions" by R. Bruschi et al. presents and evaluates an energy-aware game-theory-based solution for resource allocation of Virtualized Network Functions (VNFs) within NFV environments. The authors consider each VNF as a player of the problem that competes for the physical network node capacity pool, seeking the minimization of individual cost functions. The physical network nodes dynamically adjust their processing capacity according to the incoming workload flows, by means of an Adaptive Rate strategy that aims at minimizing the product of energy consumption and processing delay.

"A Processor-Sharing Scheduling Strategy for NFV Nodes" by G. Faraci et al. focuses on the allocation strategies of processing resources to the virtual machines running the VNFs. The main contribution of the paper is the definition of a processor-sharing policy, referred to as Network-Aware Round Robin (NARR).


The proposed strategy dynamically changes the slices of the CPU assigned to each VNF according to the state of the output network interface card queues. In order not to waste output link bandwidth, more processing resources are assigned to the VNFs whose packets are addressed towards the least loaded output NIC.

"Virtual Networking Performance in OpenStack Platform for Network Function Virtualization" by F. Callegati et al. provides a performance evaluation of an Open Source Virtual Infrastructure Manager (VIM), OpenStack, focusing in particular on packet forwarding performance issues. A set of experiments is presented, referring to a number of scenarios inspired by the cloud computing and NFV paradigms, considering both single- and multitenant scenarios.

Vincenzo Eramo
Xavier Hesselbach-Serra
Yan Luo
Juan Felipe Botero

Research Article

Virtual Networking Performance in OpenStack Platform for Network Function Virtualization

Franco Callegati, Walter Cerroni, and Chiara Contoli

DEI, University of Bologna, Via Venezia 52, 47521 Cesena, Italy

Correspondence should be addressed to Walter Cerroni; walter.cerroni@unibo.it

Received 19 October 2015; Revised 19 January 2016; Accepted 30 March 2016

Academic Editor: Yan Luo

Copyright © 2016 Franco Callegati et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The emerging Network Function Virtualization (NFV) paradigm, coupled with the highly flexible and programmatic control of network devices offered by Software Defined Networking solutions, enables unprecedented levels of network virtualization that will definitely change the shape of future network architectures, where legacy telco central offices will be replaced by cloud data centers located at the edge. On the one hand, this software-centric evolution of telecommunications will allow network operators to take advantage of the increased flexibility and reduced deployment costs typical of cloud computing. On the other hand, it will pose a number of challenges in terms of virtual network performance and customer isolation. This paper intends to provide some insights on how an open-source cloud computing platform such as OpenStack implements multitenant network virtualization and how it can be used to deploy NFV, focusing in particular on packet forwarding performance issues. To this purpose, a set of experiments is presented that refer to a number of scenarios inspired by the cloud computing and NFV paradigms, considering both single tenant and multitenant scenarios. From the results of the evaluation it is possible to highlight potentials and limitations of running NFV on OpenStack.

1. Introduction

Despite the original vision of the Internet as a set of networks interconnected by distributed layer 3 routing nodes, nowadays IP datagrams are not simply forwarded to their final destination based on IP header and next-hop information. A number of so-called middle-boxes process IP traffic, performing cross-layer tasks such as address translation, packet inspection and filtering, QoS management, and load balancing. They represent a significant fraction of network operators' capital and operational expenses. Moreover, they are closed systems, and the deployment of new communication services is strongly dependent on the product capabilities, causing the so-called "vendor lock-in" and Internet "ossification" phenomena [1]. A possible solution to this problem is the adoption of virtualized middle-boxes based on open software and hardware solutions. Network virtualization brings great advantages in terms of flexible network management, performed at the software level, and possible coexistence of multiple customers sharing the same physical infrastructure (i.e., multitenancy). Network virtualization

solutions are already widely deployed at different protocol layers, including Virtual Local Area Networks (VLANs), multilayer Virtual Private Network (VPN) tunnels over public wide-area interconnections, and Overlay Networks [2].

Today, the combination of emerging technologies such as Network Function Virtualization (NFV) and Software Defined Networking (SDN) promises to bring innovation one step further. SDN provides a more flexible and programmatic control of network devices and fosters new forms of virtualization that will definitely change the shape of future network architectures [3], while NFV defines standards to deploy software-based building blocks implementing highly flexible network service chains capable of adapting to the rapidly changing user requirements [4].

As a consequence, it is possible to imagine a medium-term evolution of the network architectures where middle-boxes will turn into virtual machines (VMs) implementing network functions within cloud computing infrastructures, and telco central offices will be replaced by data centers located at the edge of the network [5–7]. Network operators will take advantage of the increased flexibility and reduced


deployment costs typical of the cloud-based approach, paving the way to the upcoming software-centric evolution of telecommunications [8]. However, a number of challenges must be dealt with, in terms of system integration, data center management, and packet processing performance. For instance, if VLANs are used in the physical switches and in the virtual LANs within the cloud infrastructure, a suitable integration is necessary, and the coexistence of different IP virtual networks dedicated to multiple tenants must be seamlessly guaranteed with proper isolation.

Then a few questions are naturally raised. Will cloud computing platforms be actually capable of satisfying the requirements of complex communication environments such as the operators' edge networks? Will data centers be able to effectively replace the existing telco infrastructures at the edge? Will virtualized networks provide performance comparable to that achieved with current physical networks, or will they pose significant limitations? Indeed, the answer to these questions will be a function of the cloud management platform considered. In this work the focus is on OpenStack, which is among the state-of-the-art Linux-based virtualization and cloud management tools. Developed by the open-source software community, OpenStack implements the Infrastructure-as-a-Service (IaaS) paradigm in a multitenant context [9].

To the best of our knowledge, not much work has been reported about the actual performance limits of network virtualization in OpenStack cloud infrastructures under the NFV scenario. Some authors assessed the performance of Linux-based virtual switching [10, 11], while others investigated network performance in public cloud services [12]. Solutions for low-latency SDN implementation on high-performance cloud platforms have also been developed [13]. However, none of the above works specifically deals with NFV scenarios on the OpenStack platform. Although some mechanisms for effectively placing virtual network functions within an OpenStack cloud have been presented [14], a detailed analysis of their network performance has not been provided yet.

This paper aims at providing insights on how the OpenStack platform implements multitenant network virtualization, focusing in particular on the performance issues and trying to fill a gap that is starting to get the attention also of the OpenStack developer community [15]. The paper objective is to identify performance bottlenecks in the cloud implementation of the NFV paradigms. An ad hoc set of experiments was designed to evaluate the OpenStack performance under critical load conditions, in both single tenant and multitenant scenarios. The results reported in this work extend the preliminary assessment published in [16, 17].

The paper is structured as follows: the network virtualization concept in cloud computing infrastructures is further elaborated in Section 2; the OpenStack virtual network architecture is illustrated in Section 3; the experimental test-bed that we have deployed to assess its performance is presented in Section 4; the results obtained under different scenarios are discussed in Section 5; some conclusions are finally drawn in Section 6.

2. Cloud Network Virtualization

Generally speaking, network virtualization is not a new concept. Virtual LANs, Virtual Private Networks, and Overlay Networks are examples of virtualization techniques already widely used in networking, mostly to achieve isolation of traffic flows and/or of whole network sections, either for security or for functional purposes such as traffic engineering and performance optimization [2].

Upon considering cloud computing infrastructures, the concept of network virtualization evolves even further. It is not just that some functionalities can be configured in physical devices to obtain some additional functionality in virtual form. In cloud infrastructures, whole parts of the network are virtual, implemented with software devices and/or functions running within the servers. This new "softwarized" network implementation scenario allows novel network control and management paradigms. In particular, the synergies between NFV and SDN offer programmatic capabilities that allow easily defining and flexibly managing multiple virtual network slices at levels not achievable before [1].

In cloud networking, the typical scenario is a set of VMs dedicated to a given tenant, able to communicate with each other as if connected to the same Local Area Network (LAN), independently of the physical server or servers they are running on. The VMs and LAN of different tenants have to be isolated and should communicate with the outside world only through layer 3 routing and filtering devices. From such requirements stem two major issues to be addressed in cloud networking: (i) integration of any set of virtual networks defined in the data center physical switches with the specific virtual network technologies adopted by the hosting servers and (ii) isolation among virtual networks that must be logically separated because of being dedicated to different purposes or different customers. Moreover, these problems should be solved with performance optimization in mind, for instance, aiming at keeping VMs with intensive exchange of data colocated in the same server, keeping local traffic inside the host and thus reducing the need for external network resources, and minimizing the communication latency.

The solution to these issues is usually fully supported by the VM manager (i.e., the Hypervisor) running on the hosting servers. Layer 3 routing functions can be executed by taking advantage of lightweight virtualization tools, such as Linux containers or network namespaces, resulting in isolated virtual networks with dedicated network stacks (e.g., IP routing tables and netfilter flow states) [18]. Similarly, layer 2 switching is typically implemented by means of kernel-level virtual bridges/switches interconnecting a VM's virtual interface to a host's physical interface. Moreover, the VM placing algorithms may be designed to take networking issues into account, thus optimizing the networking in the cloud together with computation effectiveness [19]. Finally, it is worth mentioning that whatever network virtualization technology is adopted within a data center, it should be compatible with SDN-based implementation of the control plane (e.g., OpenFlow) for improved manageability and programmability [20].


[Figure 1: Main components of an OpenStack cloud setup: the controller node, the network node, the compute nodes, and the storage node, interconnected by the management network, the instance/tunnel (data) network, and the external network providing connectivity towards the Internet and the cloud customers.]

For the purposes of this work, the implementation of layer 2 connectivity in the cloud environment is of particular relevance. Many Hypervisors running on Linux systems implement the LANs inside the servers using Linux Bridge, the native kernel bridging module [21]. This solution is straightforward and is natively integrated with the powerful Linux packet filtering and traffic conditioning kernel functions. The overall performance of this solution should be at a reasonable level when the system is not overloaded [22]. The Linux Bridge basically works as a transparent bridge with MAC learning, providing the same functionality as a standard Ethernet switch in terms of packet forwarding. But such standard behavior is not compatible with SDN and is not flexible enough when aspects such as multitenant traffic isolation, transparent VM mobility, and fine-grained forwarding programmability are critical. The Linux-based bridging alternative is Open vSwitch (OVS), a software switching facility specifically designed for virtualized environments and capable of reaching kernel-level performance [23]. OVS is also OpenFlow-enabled and therefore fully compatible and integrated with SDN solutions.

3. OpenStack Virtual Network Infrastructure

OpenStack provides cloud managers with a web-based dashboard, as well as a powerful and flexible Application Programmable Interface (API), to control a set of physical hosting servers executing different kinds of Hypervisors (in general, OpenStack is designed to manage a number of computers hosting application servers; these application servers can be executed by fully fledged VMs, lightweight containers, or bare-metal hosts; in this work we focus on the most challenging case of application servers running on VMs) and to manage the required storage facilities and virtual network infrastructures. The OpenStack dashboard also allows instantiating computing and networking resources within the data center infrastructure with a high level of transparency. As illustrated in Figure 1, a typical OpenStack cloud is composed of a number of physical nodes and networks:

(i) Controller node: managing the cloud platform.
(ii) Network node: hosting the networking services for the various tenants of the cloud and providing external connectivity.
(iii) Compute nodes: as many hosts as needed in the cluster to execute the VMs.
(iv) Storage nodes: to store data and VM images.
(v) Management network: the physical networking infrastructure used by the controller node to manage the OpenStack cloud services running on the other nodes.
(vi) Instance/tunnel network (or data network): the physical network infrastructure connecting the network node and the compute nodes, to deploy virtual tenant networks and allow inter-VM traffic exchange and VM connectivity to the cloud networking services running in the network node.
(vii) External network: the physical infrastructure enabling connectivity outside the data center.

OpenStack has a component specifically dedicated to network service management: this component, formerly known as Quantum, was renamed as Neutron in the Havana release. Neutron decouples the network abstractions from the actual implementation and provides administrators and users with a flexible interface for virtual network management. The Neutron server is centralized and typically runs in the controller node. It stores all network-related information and implements the virtual network infrastructure in a distributed and coordinated way. This allows Neutron to transparently manage multitenant networks across multiple compute nodes and to provide transparent VM mobility within the data center.

Neutron's main network abstractions are:

(i) network: a virtual layer 2 segment;
(ii) subnet: a layer 3 IP address space used in a network;
(iii) port: an attachment point to a network and to one or more subnets on that network;
(iv) router: a virtual appliance that performs routing between subnets and address translation;
(v) DHCP server: a virtual appliance in charge of IP address distribution;
(vi) security group: a set of filtering rules implementing a cloud-level firewall.

A cloud customer wishing to implement a virtual infrastructure in the cloud is considered an OpenStack tenant and can use the OpenStack dashboard to instantiate computing and networking resources, typically creating a new network and the necessary subnets, optionally spawning the related DHCP servers, then starting as many VM instances as required, based on a given set of available images, and specifying the subnet (or subnets) to which the VM is connected. Neutron takes care of creating a port on each specified subnet (and its underlying network) and of connecting the VM to that port, while the DHCP service on that network (resident in the network node) assigns a fixed IP address to it. Other virtual appliances (e.g., routers providing global connectivity) can be implemented directly in the cloud platform, by means of containers and network namespaces typically defined in the network node. The different tenant networks are isolated by means of VLANs and network namespaces, whereas the security groups protect the VMs from external attacks or unauthorized access. When some VM instances offer services that must be reachable by external users, the cloud provider defines a pool of floating IP addresses on the external network and configures the network node with VM-specific forwarding rules based on those floating addresses.
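The same workflow can also be scripted; the following sketch uses the openstacksdk Python client as one possible illustration (the cloud name, image, and flavor identifiers are placeholders, not values from this paper):

import openstack

# Connect using credentials from a clouds.yaml entry (name assumed).
conn = openstack.connect(cloud="mycloud")

# Create one tenant network with a subnet; Neutron spawns the DHCP server.
net = conn.network.create_network(name="tenant-net")
subnet = conn.network.create_subnet(
    network_id=net.id, name="tenant-subnet",
    ip_version=4, cidr="10.0.0.0/24")

# Boot a VM attached to that network; Neutron creates the port and the
# DHCP service assigns a fixed IP address to it.
server = conn.compute.create_server(
    name="vm1", image_id="<image-uuid>", flavor_id="<flavor-uuid>",
    networks=[{"uuid": net.id}])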

OpenStack implements the virtual network infrastructure (VNI) exploiting multiple virtual bridges connecting virtual and/or physical interfaces that may reside in different network namespaces. To better understand such a complex system, a graphical tool was developed to display all the network elements used by OpenStack [24]. Two examples, showing the internal state of a network node connected to three virtual subnets and a compute node running two VMs, are displayed in Figures 2 and 3, respectively.

Each node runs an OVS-based integration bridge named br-int and, connected to it, an additional OVS bridge for each data center physical network attached to the node. So the network node (Figure 2) includes br-tun for the instance/tunnel network and br-ex for the external network. A compute node (Figure 3) includes br-tun only.

Layer 2 virtualization and multitenant isolation on the physical network can be implemented using either VLANs or layer 2-in-layer 3/4 tunneling solutions, such as Virtual eXtensible LAN (VXLAN) or Generic Routing Encapsulation (GRE), which allow extending the local virtual networks also to remote data centers [25]. The examples shown in Figures 2 and 3 refer to the case of tenant isolation implemented with GRE tunnels on the instance/tunnel network. Whatever virtualization technology is used in the physical network, its virtual networks must be mapped into the VLANs used internally by Neutron to achieve isolation. This is performed by taking advantage of the programmable features available in OVS, through the insertion of appropriate OpenFlow mapping rules in br-int and br-tun.
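As a simplified illustration of this mapping, the following sketch installs two flow rules with the standard ovs-ofctl utility (port numbers, VLAN ID, and tunnel ID are hypothetical, and the real Neutron pipeline spreads this logic across several OpenFlow tables):

import subprocess

def ofctl(bridge, flow):
    # Install an OpenFlow rule through the standard ovs-ofctl utility.
    subprocess.run(["ovs-ofctl", "add-flow", bridge, flow], check=True)

# Traffic arriving from the GRE tunnel (port 2) with tunnel ID 0x40 is
# tagged with the tenant's internal VLAN 1 and handed to br-int (port 1)...
ofctl("br-tun", "in_port=2,tun_id=0x40,actions=mod_vlan_vid:1,output:1")

# ...and, in the opposite direction, the internal VLAN tag is stripped
# and mapped back to the tunnel ID before leaving the node.
ofctl("br-tun", "in_port=1,dl_vlan=1,actions=strip_vlan,set_tunnel:0x40,output:2")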

Virtual bridges are interconnected by means of either virtual Ethernet (veth) pairs or patch port pairs, consisting of two virtual interfaces that act as the endpoints of a pipe: anything entering one endpoint always comes out on the other side.

From the networking point of view, the creation of a new VM instance involves the following steps:

(i) The OpenStack scheduler component running in the controller node chooses the compute node that will host the VM.
(ii) A tap interface is created for each VM network interface, to connect it to the Linux kernel.
(iii) A Linux Bridge dedicated to each VM network interface is created (in Figure 3 two of them are shown), and the corresponding tap interface is attached to it.
(iv) A veth pair connecting the new Linux Bridge to the integration bridge is created.

The veth pair clearly emulates the Ethernet cable that would connect the two bridges in real life. Nonetheless, why the new Linux Bridge is needed is not intuitive, as the VM's tap interface could be directly attached to br-int. In short, the reason is that the antispoofing rules currently implemented by Neutron adopt the native Linux kernel filtering functions (netfilter) applied to bridged tap interfaces, which work only under Linux Bridges. Therefore, the Linux Bridge is required as an intermediate element to interconnect the VM to the integration bridge. The security rules are applied to the Linux Bridge on the tap interface that connects the kernel-level bridge to the virtual Ethernet port of the VM running in user-space.
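The per-VM plumbing described above can also be reproduced by hand; the sketch below drives the standard ip, brctl, and ovs-vsctl tools from Python, using hypothetical interface names in the qbr/qvb/qvo style adopted by Neutron.

import subprocess

def sh(*cmd):
    # Run one command, raising an exception on failure
    subprocess.run(cmd, check=True)

tap = 'tapdemo0'                    # VM tap interface (hypothetical name)
qbr = 'qbrdemo0'                    # per-VM Linux Bridge
qvb, qvo = 'qvbdemo0', 'qvodemo0'   # veth pair endpoints

# Linux Bridge dedicated to the VM, with the tap attached to it;
# this is where the netfilter-based security rules are enforced
sh('brctl', 'addbr', qbr)
sh('brctl', 'addif', qbr, tap)

# veth pair acting as the virtual cable between the two bridges
sh('ip', 'link', 'add', qvb, 'type', 'veth', 'peer', 'name', qvo)
sh('brctl', 'addif', qbr, qvb)              # Linux Bridge side
sh('ovs-vsctl', 'add-port', 'br-int', qvo)  # integration bridge side

for iface in (qbr, qvb, qvo):
    sh('ip', 'link', 'set', iface, 'up')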

4. Experimental Setup

The previous section makes the complexity of the OpenStack virtual network infrastructure clear. To understand optimal design strategies in terms of network performance, it is of great importance to analyze it under critical traffic conditions and assess the maximum sustainable packet rate under different application scenarios. The goal is to isolate as much as possible the level of performance of the main OpenStack network components and determine where the bottlenecks are located, speculating on possible improvements. To this purpose, a test-bed including a controller node, one or two compute nodes (depending on the specific experiment), and a network node was deployed and used to obtain the results presented in the following. In the test-bed, each compute node runs KVM, the native Linux VM Hypervisor, and is equipped


Figure 2: Network elements in an OpenStack network node connected to three virtual subnets. Three OVS bridges (red boxes) are interconnected by patch port pairs (orange boxes). br-ex is directly attached to the external network physical interface (eth0), whereas the GRE tunnel is established on the instance/tunnel network physical interface (eth1) to connect br-tun with its counterpart in the compute node. A number of br-int ports (light-green boxes) are connected to four virtual router interfaces and three DHCP servers. An additional physical interface (eth2) connects the network node to the management network.

with 8 GB of RAM and a quad-core processor with hyperthreading enabled, resulting in 8 virtual CPUs.

The test-bed was configured to implement three possible use cases:

(1) A typical single tenant cloud computing scenario.
(2) A multitenant NFV scenario with dedicated network functions.
(3) A multitenant NFV scenario with shared network functions.

For each use case multiple experiments were executed, as reported in the following. In the various experiments, typically a traffic source sends packets at increasing rate to a destination that measures the received packet rate and throughput. To this purpose, the RUDE & CRUDE tool was used for both traffic generation and measurement [26].

In some cases the iperf3 tool was also added to generate background traffic at a fixed data rate [27]. All physical interfaces involved in the experiments were Gigabit Ethernet network cards.
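For instance, a fixed-rate UDP background flow can be generated as sketched below; the destination address, rate, and duration are placeholders chosen for illustration, and an iperf3 server (iperf3 -s) is assumed to be running on the sink host.

import subprocess

# 100 Mbps UDP flow of 1470-byte datagrams for 60 s, keeping the
# resulting IP packet within a 1500-byte MTU
subprocess.run([
    'iperf3', '-c', '192.168.1.10',  # placeholder sink address
    '-u',          # UDP
    '-b', '100M',  # target bit rate
    '-l', '1470',  # datagram payload size in bytes
    '-t', '60',    # duration in seconds
], check=True)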

4.1. Single Tenant Cloud Computing Scenario. This is the typical configuration where a single tenant runs one or multiple VMs that exchange traffic with one another in the cloud or with an external host, as shown in Figure 4. This is a rather trivial case of limited general interest, but it is useful to assess some basic concepts and pave the way to the deeper analysis developed in the second part of this section. In the experiments reported, as mentioned above, the virtualization Hypervisor was always KVM. A scenario with OpenStack running the cloud environment and a scenario without OpenStack were considered to assess some general


Figure 3: Network elements in an OpenStack compute node running two VMs. Two Linux Bridges (blue boxes) are attached to the VM tap interfaces (green boxes) and connected by virtual Ethernet pairs (light-blue boxes) to br-int.

comparison and allow a first isolation of the performance degradation due to the individual building blocks, in particular Linux Bridge and OVS. The experiments report the following cases:

(1) OpenStack scenario: it adopts the standard OpenStack cloud platform, as described in the previous section, with two VMs respectively acting as sender and receiver. In particular, the following setups were tested:

(1.1) A single compute node executing two colocated VMs.

(1.2) Two distinct compute nodes, each executing a VM.

(2) Non-OpenStack scenario: it adopts physical hosts running Linux-Ubuntu server and KVM Hypervisor, using either OVS or Linux Bridge as a virtual switch. The following setups were tested:

(2.1) One physical host executing two colocated VMs, acting as sender and receiver and directly connected to the same Linux Bridge.

(2.2) The same setup as the previous one, but with an OVS bridge instead of a Linux Bridge.

(2.3) Two physical hosts: one executing the sender VM connected to an internal OVS, and the other natively acting as the receiver.


Figure 4: Reference logical architecture of a single tenant virtual infrastructure with 5 hosts: 4 hosts are implemented as VMs in the cloud and are interconnected via the OpenStack layer 2 virtual infrastructure; the 5th host is implemented by a physical machine placed outside the cloud but still connected to the same logical LAN.


Figure 5: Multitenant NFV scenario with dedicated network functions tested on the OpenStack platform.

4.2. Multitenant NFV Scenario with Dedicated Network Functions. The multitenant scenario we want to analyze is inspired by a simple NFV case study, as illustrated in Figure 5: each tenant's service chain consists of a customer-controlled VM, followed by a dedicated deep packet inspection (DPI) virtual appliance and a conventional gateway (router) connecting the customer LAN to the public Internet. The DPI is deployed by the service operator as a separate VM with two network interfaces, running a traffic monitoring application based on the nDPI library [28]. It is assumed that the DPI analyzes the traffic profile of the customers (source and destination IP addresses and ports, application protocol, etc.) to guarantee the matching with the customer service level agreement (SLA), a practice that is rather common among Internet service providers to enforce network security and traffic policing. The virtualization approach, executing the DPI in a VM, makes it possible to easily configure and adapt the inspection function to the specific tenant characteristics. For this reason, every tenant has its own DPI with dedicated configuration. On the other hand, the gateway has to implement a standard functionality and is shared among customers. It is implemented as a virtual router for packet forwarding and NAT operations.

The implementation of the test scenarios has been done following the OpenStack architecture. The compute nodes of the cluster run the VMs, while the network node runs the virtual router within a dedicated network namespace. All layer 2 connections are implemented by a virtual switch (with proper VLAN isolation) distributed in both the compute and network nodes. Figure 6 shows the view provided by the OpenStack dashboard in the case of 4 tenants simultaneously active, which is the one considered for the numerical results presented in the following. The choice of 4 tenants was made to provide meaningful results with an acceptable degree of complexity, without lack of generality. As results show, this is enough to put the hardware resources of the compute node under stress and therefore evaluate performance limits and critical issues.

It is very important to outline that the VM setup shown in Figure 5 is not commonly seen in a traditional cloud computing environment. The VMs usually behave as single hosts connected as endpoints to one or more virtual networks, with one single network interface and no pass-through forwarding duties. In NFV, the virtual network functions (VNFs) often perform actions that require packet forwarding: Network Address Translators (NATs), Deep Packet Inspectors (DPIs), and so forth all belong to this category. If such VNFs are hosted in VMs, the result is that VMs in the OpenStack infrastructure must be allowed to perform packet forwarding, which goes against the typical rules implemented for security reasons in OpenStack. For instance, when a new VM is instantiated, it is attached to a Linux Bridge to which filtering rules are applied with the goal of avoiding that the VM sends packets with MAC and IP addresses that are not the ones allocated to the VM itself. Clearly, this is an antispoofing rule that makes perfect sense in a normal networking environment, but it impairs the forwarding of packets originated by another VM, as is the case of the NFV scenario. In the scenario considered here, it was therefore necessary to permanently modify the filtering rules in the Linux Bridges, by allowing, within each tenant slice, packets coming from or directed to the customer VM's IP address to pass through the Linux Bridges attached to the DPI virtual appliance. Similarly, the virtual router is usually connected just to one LAN, so its NAT function is configured for a single pool of addresses. This was also modified and adapted to serve the whole set of internal networks used in the multitenant setup.
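A sketch of the kind of exception that was introduced is given below; the per-port netfilter chain name and the addresses are hypothetical, since the actual chains generated by Neutron are named after the port identifiers and contain many more rules.

import subprocess

# Let packets sourced from the customer VM's address pass through the
# antispoofing chain protecting the DPI port (all values are placeholders)
subprocess.run([
    'iptables', '-I', 'neutron-openvswi-s77f8a413-4',  # hypothetical chain name
    '-s', '10.0.3.5',  # customer VM IP address (placeholder)
    '-j', 'RETURN',
], check=True)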

4.3. Multitenant NFV Scenario with Shared Network Functions. We finally extend our analysis to a set of multitenant scenarios assuming different levels of shared VNFs, as illustrated in Figure 7. We start with a single VNF, that is, the virtual router connecting all tenants to the external network (Figure 7(a)). Then we progressively add a shared DPI (Figure 7(b)), a shared firewall/NAT function (Figure 7(c)), and a shared traffic shaper (Figure 7(d)). The rationale behind this last group of setups is to evaluate how NFV deployment on top of an OpenStack compute node performs under a realistic multitenant scenario, where traffic flows must be processed by a chain of multiple VNFs. The complexity of the


Figure 6: The OpenStack dashboard shows the tenants' virtual networks (slices). Each slice includes a VM connected to an internal network (InVMnet_i) and a second VM performing DPI and packet forwarding between InVMnet_i and DPInet_i. Connectivity with the public Internet is provided for all by the virtual router in the bottom-left corner.

virtual network path inside the compute node for the VNF chaining of Figure 7(d) is displayed in Figure 8. The peculiar nature of NFV traffic flows is clearly shown in the figure, where packets are being forwarded multiple times across br-int as they enter and exit the multiple VNFs running in the compute node.

5. Numerical Results

5.1. Benchmark Performance. Before presenting and discussing the performance of the study scenarios described above, it is important to set some benchmark as a reference for comparison. This was done by considering a back-to-back (B2B) connection between two physical hosts with the same hardware configuration used in the cluster of the cloud platform.

The former host acts as traffic generator, while the latter acts as traffic sink. The aim is to verify and assess the maximum throughput and sustainable packet rate of the hardware platform used for the experiments. Packet flows ranging from 10^3 to 10^5 packets per second (pps), for both 64- and 1500-byte IP packet sizes, were generated.

For 1500-byte packets, the throughput saturates to about 970 Mbps at 80 Kpps. Given that the measurement does not consider the Ethernet overhead, this limit is clearly very close to 1 Gbps, which is the physical limit of the Ethernet interface. For 64-byte packets the results are different, since the maximum measured throughput is about 150 Mbps. Therefore, the limiting factor is not the Ethernet bandwidth

Figure 7: Multitenant NFV scenario with shared network functions tested on the OpenStack platform: (a) single VNF; (b) two VNFs' chaining; (c) three VNFs' chaining; (d) four VNFs' chaining.

but the maximum sustainable packet processing rate of the compute node. These results are shown in Figure 9.
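A quick sanity check confirms that the 1500-byte saturation point coincides with the line rate:

$$80\,\text{kpps} \times 1500\,\text{bytes} \times 8\,\text{bits/byte} = 960\,\text{Mbps} \approx 1\,\text{Gbps},$$

which is consistent with the 970 Mbps plateau measured once the absence of Ethernet overhead in the measurement is taken into account.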

This latter limitation, related to the processing capabilities of the hosts, is not very relevant to the scope of this work. Indeed, it is always possible in a real operation environment to deploy more powerful and better dimensioned hardware. This was not possible in this set of experiments, where the cloud cluster was an existing research infrastructure which could not be modified at will. Nonetheless, the objective here is to understand the limitations that emerge as a consequence of the networking architecture resulting from the deployment of the VNFs in the cloud, and not of the specific hardware configuration. For these reasons, as well as for the sake of brevity, the numerical results presented in the following mostly focus on the case of 1500-byte packet length, which will stress the network more than the hosts in terms of performance.

5.2. Single Tenant Cloud Computing Scenario. The first series of results is related to the single tenant scenario described in Section 4.1. Figure 10 shows the comparison of OpenStack setups (1.1) and (1.2) with the B2B case. The figure shows that the different networking configurations play a crucial role in performance. Setup (1.1), with the two VMs colocated in the same compute node, clearly is more demanding, since the compute node has to process the workload of all the components shown in Figure 3, that is, packet generation and reception in two VMs and layer 2 switching in two Linux Bridges and two OVS bridges (as a matter of fact, the packets are both outgoing and incoming at the same time within the same physical machine). The performance starts deviating from the B2B case at around 20 Kpps, with a saturating effect starting at 30 Kpps. This is the maximum packet processing capability of the compute node, regardless of the physical networking capacity, which is not fully exploited in this particular scenario where the traffic flow does not leave the physical host. Setup (1.2) splits the workload over two physical machines, and the benefit is evident. The performance is almost ideal, with a very little penalty due to the virtualization overhead.

These very simple experiments lead to an important conclusion that motivates the more complex experiments


Figure 8: A view of the OpenStack compute node with the tenant VM and the VNFs installed, including the building blocks of the virtual network infrastructure. The red dashed line shows the path followed by the packets traversing the VNF chain displayed in Figure 7(d).


Figure 9: Throughput versus generated packet rate in the B2B setup for 64- and 1500-byte packets. Comparison with ideal 1500-byte packet throughput.

that follow: the standard OpenStack virtual network implementation can show significant performance limitations. For this reason, the first objective was to investigate where the possible bottleneck is, by evaluating the performance of the virtual network components in isolation. This cannot be done with OpenStack in action; therefore, ad hoc virtual networking scenarios were implemented, deploying just parts


Figure 10: Received versus generated packet rate in the OpenStack scenario, setups (1.1) and (1.2), with 1500-byte packets.

of the typical OpenStack infrastructure. These are called Non-OpenStack scenarios in the following.

Setups (2.1) and (2.2) compare Linux Bridge, OVS, and B2B, as shown in Figure 11. The graphs show interesting and important results that can be summarized as follows:

(i) The introduction of some virtual network component (thus introducing the processing load of the physical


Figure 11: Received versus generated packet rate in the Non-OpenStack scenario, setups (2.1) and (2.2), with 1500-byte packets.

hosts in the equation) is always a cause of performance degradation, but with very different degrees of magnitude depending on the virtual network component.

(ii) OVS introduces a rather limited performance degradation at very high packet rate, with a loss of a few percent.

(iii) Linux Bridge introduces a significant performance degradation, starting well before the OVS case and leading to a loss in throughput as high as 50%.

The conclusion of these experiments is that the presence of additional Linux Bridges in the compute nodes is one of the main reasons for the OpenStack performance degradation. Results obtained from testing setup (2.3) are displayed in Figure 12, confirming that with OVS it is possible to reach performance comparable with the baseline.

5.3. Multitenant NFV Scenario with Dedicated Network Functions. The second series of experiments was performed with reference to the multitenant NFV scenario with dedicated network functions described in Section 4.2. The case study considers that different numbers of tenants are hosted in the same compute node, sending data to a destination outside the LAN, therefore beyond the virtual gateway. Figure 13 shows the packet rate actually received at the destination for each tenant, for different numbers of simultaneously active tenants, with 1500-byte IP packet size. In all cases, the tenants generate the same amount of traffic, resulting in as many overlapping curves as the number of active tenants. All curves grow linearly as long as the generated traffic is sustainable, and then they saturate. The saturation is caused by the physical bandwidth limit imposed by the Gigabit Ethernet interfaces involved in the data transfer. In fact, the curves become flat as soon as the packet rate reaches about 80 Kpps for 1 tenant, about 40 Kpps for 2 tenants, about 27 Kpps for 3 tenants, and


Figure 12: Received versus generated packet rate in the Non-OpenStack scenario, setup (2.3), with 1500-byte packets.


Figure 13: Received versus generated packet rate for each tenant (T1, T2, T3, and T4), for different numbers of active tenants, with 1500-byte IP packet size.

about 20 Kpps for 4 tenants, that is, when the total packet rate is slightly more than 80 Kpps, corresponding to 1 Gbps.

In this case, it is worth investigating what happens for small packets, therefore putting more pressure on the processing capabilities of the compute node. Figure 14 reports the 64-byte packet size case. As discussed previously, in this case the performance saturation is not caused by the physical bandwidth limit, but by the inability of the hardware platform to cope with the packet processing workload (in fact, the single compute node has to process the workload of all the components involved, including packet generation and DPI in the VMs of each tenant, as well as layer 2 packet


Figure 14: Received versus generated packet rate for each tenant (T1, T2, T3, and T4), for different numbers of active tenants, with 64-byte IP packet size.

processing and switching in three Linux Bridges per tenant and two OVS bridges). As could be easily expected from the results presented in Figure 9, the virtual network is not able to use the whole physical capacity. Even in the case of just one tenant, a total bit rate of about 77 Mbps, well below 1 Gbps, is measured. Moreover, this penalty increases with the number of tenants (i.e., with the complexity of the virtual system). With two tenants, the curve saturates at a total of approximately 150 Kpps (75 × 2); with three tenants, at a total of approximately 135 Kpps (45 × 3); and with four tenants, at a total of approximately 120 Kpps (30 × 4). This is to say that an increase of one unit in the number of tenants results in a decrease of about 10% in the usable overall network capacity, and in a similar penalty per tenant.

Given the results of the previous section, it is likely that the Linux Bridges are responsible for most of this performance degradation. In Figure 15, a comparison is presented between the total throughput obtained under normal OpenStack operations and the corresponding total throughput measured in a custom configuration where the Linux Bridges attached to each VM are bypassed. To implement the latter scenario, the OpenStack virtual network configuration running in the compute node was modified by connecting each VM's tap interface directly to the OVS integration bridge. The curves show that the presence of Linux Bridges in normal OpenStack mode is indeed causing performance degradation, especially when the workload is high (i.e., with 4 tenants). It is interesting to note also that the penalty related to the number of tenants is mitigated by the bypass, but not fully solved.
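A minimal sketch of the bypass operation is reported below, reusing the interface names of Figure 3; note that detaching the tap from its Linux Bridge also disables the netfilter-based security group rules for that port, which is acceptable only in a controlled experiment.

import subprocess

def sh(*cmd):
    subprocess.run(cmd, check=True)

tap = 'tap03b80be1-55'  # VM tap interface (name from Figure 3)
qbr = 'qbr03b80be1-55'  # per-VM Linux Bridge to be bypassed
qvo = 'qvo03b80be1-55'  # OVS-side endpoint of the veth pair

# Remove the Linux Bridge hop: detach the tap and drop the veth port
sh('brctl', 'delif', qbr, tap)
sh('ovs-vsctl', 'del-port', 'br-int', qvo)

# Attach the tap directly to the integration bridge
sh('ovs-vsctl', 'add-port', 'br-int', tap)
sh('ip', 'link', 'set', tap, 'up')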

5.4. Multitenant NFV Scenario with Shared Network Functions. The third series of experiments was performed with


Figure 15: Total throughput measured versus total packet rate generated by 2 to 4 tenants, for 64-byte packet size. Comparison between normal OpenStack mode and Linux Bridge bypass with 3 and 4 tenants.


Figure 16: Received versus generated packet rate for one tenant (T1) when four tenants are active, with 1500-byte IP packet size and different levels of VNF chaining as per Figure 7. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

reference to the multitenant NFV scenario with shared network functions described in Section 4.3. In each experiment, four tenants are equally generating increasing amounts of traffic, ranging from 1 to 100 Kpps. Figures 16 and 17 show the packet rate actually received at the destination from tenant T1, as a function of the packet rate generated by T1, for different levels of VNF chaining, with 1500- and 64-byte IP packet size, respectively. The measurements demonstrate that, for the 1500-byte case, adding a single shared VNF (even one that executes heavy packet processing, such as the DPI)


Figure 17: Received versus generated packet rate for one tenant (T1) when four tenants are active, with 64-byte IP packet size and different levels of VNF chaining as per Figure 7. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

does not significantly impact the forwarding performance of the OpenStack compute node for a packet rate below 50 Kpps (note that the physical capacity is saturated by the flows simultaneously generated from four tenants at around 20 Kpps, similarly to what happens in the dedicated VNF case of Figure 13). Then the throughput slowly degrades. In contrast, when 64-byte packets are generated, even a single VNF can cause heavy performance losses above 25 Kpps, when the packet rate reaches the sustainability limit of the forwarding capacity of our compute node. Independently of the packet size, adding another VNF with heavy packet processing (the firewall/NAT is configured with 40,000 matching rules) causes the performance to rapidly degrade. This is confirmed when a fourth VNF is added to the chain, although for the 1500-byte case the measured packet rate is the one that saturates the maximum bandwidth made available by the traffic shaper. Very similar performance, which we do not show here, was measured also for the other three tenants.

To further investigate the effect of VNF chaining, we considered the case when traffic generated by tenant T1 is not subject to VNF chaining (as in Figure 7(a)), whereas flows originated from T2, T3, and T4 are processed by four VNFs (as in Figure 7(d)). The results presented in Figures 18 and 19 demonstrate that, owing to the traffic shaping function applied to the other tenants, the throughput of T1 can reach values not very far from the case when it is the only active tenant, especially for packet rates below 35 Kpps. Therefore, a smart choice of the VNF chaining and a careful planning of the cloud platform resources could improve the performance of a given class of priority customers. In the same situation, we measured the TCP throughput achievable by the four tenants. As shown in Figure 20, we can reach the same conclusions as in the UDP case.


Figure 18: Received throughput versus generated packet rate for each tenant (T1, T2, T3, and T4) when T1 does not traverse the VNF chain of Figure 7(d), with 1500-byte IP packet size. Comparison with the single tenant case. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.


Figure 19: Received throughput versus generated packet rate for each tenant (T1, T2, T3, and T4) when T1 does not traverse the VNF chain of Figure 7(d), with 64-byte IP packet size. Comparison with the single tenant case. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

6. Conclusion

Network Function Virtualization will completely reshape the approach of telco operators to provide existing as well as novel network services, taking advantage of the increased


Figure 20: Received TCP throughput for each tenant (T1, T2, T3, and T4) when T1 does not traverse the VNF chain of Figure 7(d). Comparison with the single tenant case. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

flexibility and reduced deployment costs of the cloud computing paradigm. In this work, the problem of evaluating complexity and performance, in terms of sustainable packet rate, of virtual networking in cloud computing infrastructures dedicated to NFV deployment was addressed. An OpenStack-based cloud platform was considered and deeply analyzed to fully understand the architecture of its virtual network infrastructure. To this end, an ad hoc visual tool was also developed that graphically plots the different functional blocks (and related interconnections) put in place by Neutron, the OpenStack networking service. Some examples were provided in the paper.

The analysis brought the focus of the performance investigation on the two basic software switching elements natively adopted by OpenStack, namely, Linux Bridge and Open vSwitch. Their performance was first analyzed in a single tenant cloud computing scenario, by running experiments on a standard OpenStack setup as well as in ad hoc stand-alone configurations built with the specific purpose of observing them in isolation. The results prove that the Linux Bridge is the critical bottleneck of the architecture, while Open vSwitch shows an almost optimal behavior.

The analysis was then extended to more complex scenarios, assuming a data center hosting multiple tenants deploying NFV environments. The case studies considered first a simple dedicated deep packet inspection function, followed by conventional address translation and routing, and then a more realistic virtual network function chaining shared among a set of customers with increased levels of complexity. Results about sustainable packet rate and throughput performance of the virtual network infrastructure were presented and discussed.

The main outcome of this work is that an open-source cloud computing platform such as OpenStack can be effectively adopted to deploy NFV in network edge data centers replacing legacy telco central offices. However, this solution poses some limitations to the network performance, which are not simply related to the hosting hardware maximum capacity, but also to the virtual network architecture implemented by OpenStack. Nevertheless, our study demonstrates that some of these limitations can be mitigated with a careful redesign of the virtual network infrastructure and an optimal planning of the virtual network functions. In any case, such limitations must be carefully taken into account for any engineering activity in the virtual networking arena.

Obviously, scaling up the system and distributing the virtual network functions among several compute nodes will definitely improve the overall performance. However, in this case the role of the physical network infrastructure becomes critical, and an accurate analysis is required in order to isolate the contributions of virtual and physical components. We plan to extend our study in this direction in our future work, after properly upgrading our experimental test-bed.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work was partially funded by EIT ICT Labs, Action Line on Future Networking Solutions, Activity no. 15270/2015 "SDN at the Edges". The authors would like to thank Mr. Giuliano Santandrea for his contributions to the experimental setup.

References

[1] B. Han, V. Gopalakrishnan, L. Ji, and S. Lee, "Network function virtualization: challenges and opportunities for innovations," IEEE Communications Magazine, vol. 53, no. 2, pp. 90–97, 2015.

[2] N. M. Mosharaf Kabir Chowdhury and R. Boutaba, "A survey of network virtualization," Computer Networks, vol. 54, no. 5, pp. 862–876, 2010.

[3] The Open Networking Foundation, Software-Defined Networking: The New Norm for Networks, ONF White Paper, The Open Networking Foundation, 2012.

[4] The European Telecommunications Standards Institute, "Network functions virtualisation (NFV): architectural framework," ETSI GS NFV 002 V1.2.1, The European Telecommunications Standards Institute, 2014.

[5] A. Manzalini, R. Minerva, F. Callegati, W. Cerroni, and A. Campi, "Clouds of virtual machines in edge networks," IEEE Communications Magazine, vol. 51, no. 7, pp. 63–70, 2013.

[6] J. Soares, C. Goncalves, B. Parreira et al., "Toward a telco cloud environment for service functions," IEEE Communications Magazine, vol. 53, no. 2, pp. 98–106, 2015.

[7] Open Networking Lab, Central Office Re-Architected as Datacenter (CORD), ON.Lab White Paper, Open Networking Lab, 2015.

[8] K. Pretz, "Software already defines our lives – but the impact of SDN will go beyond networking alone," IEEE The Institute, vol. 38, no. 4, p. 8, 2014.

[9] OpenStack Project, http://www.openstack.org.

[10] F. Sans and E. Gamess, "Analytical performance evaluation of different switch solutions," Journal of Computer Networks and Communications, vol. 2013, Article ID 953797, 11 pages, 2013.

[11] P. Emmerich, D. Raumer, F. Wohlfart, and G. Carle, "Performance characteristics of virtual switching," in Proceedings of the 3rd International Conference on Cloud Networking (CloudNet '13), pp. 120–125, IEEE, Luxembourg City, Luxembourg, October 2014.

[12] R. Shea, F. Wang, H. Wang, and J. Liu, "A deep investigation into network performance in virtual machine based cloud environments," in Proceedings of the 33rd IEEE Conference on Computer Communications (INFOCOM '14), pp. 1285–1293, IEEE, Ontario, Canada, May 2014.

[13] P. Rad, R. V. Boppana, P. Lama, G. Berman, and M. Jamshidi, "Low-latency software defined network for high performance clouds," in Proceedings of the 10th System of Systems Engineering Conference (SoSE '15), pp. 486–491, San Antonio, Tex, USA, May 2015.

[14] S. Oechsner and A. Ripke, "Flexible support of VNF placement functions in OpenStack," in Proceedings of the 1st IEEE Conference on Network Softwarization (NETSOFT '15), pp. 1–6, London, UK, April 2015.

[15] G. Almasi, M. Banikazemi, B. Karacali, M. Silva, and J. Tracey, "Openstack networking: it's time to talk performance," in Proceedings of the OpenStack Summit, Vancouver, Canada, May 2015.

[16] F. Callegati, W. Cerroni, C. Contoli, and G. Santandrea, "Performance of network virtualization in cloud computing infrastructures: the Openstack case," in Proceedings of the 3rd IEEE International Conference on Cloud Networking (CloudNet '14), pp. 132–137, Luxembourg City, Luxembourg, October 2014.

[17] F. Callegati, W. Cerroni, C. Contoli, and G. Santandrea, "Performance of multi-tenant virtual networks in OpenStack-based cloud infrastructures," in Proceedings of the 2nd IEEE Workshop on Cloud Computing Systems, Networks, and Applications (CCSNA '14), in conjunction with IEEE Globecom 2014, pp. 81–85, Austin, Tex, USA, December 2014.

[18] N. Handigol, B. Heller, V. Jeyakumar, B. Lantz, and N. McKeown, "Reproducible network experiments using container-based emulation," in Proceedings of the 8th International Conference on Emerging Networking Experiments and Technologies (CoNEXT '12), pp. 253–264, ACM, December 2012.

[19] P. Bellavista, F. Callegati, W. Cerroni et al., "Virtual network function embedding in real cloud environments," Computer Networks, vol. 93, part 3, pp. 506–517, 2015.

[20] M. F. Bari, R. Boutaba, R. Esteves et al., "Data center network virtualization: a survey," IEEE Communications Surveys & Tutorials, vol. 15, no. 2, pp. 909–928, 2013.

[21] The Linux Foundation, Linux Bridge, 2009, http://www.linuxfoundation.org/collaborate/workgroups/networking/bridge.

[22] J. T. Yu, "Performance evaluation of Linux bridge," in Proceedings of the Telecommunications System Management Conference, Louisville, Ky, USA, April 2004.

[23] B. Pfaff, J. Pettit, T. Koponen, K. Amidon, M. Casado, and S. Shenker, "Extending networking into the virtualization layer," in Proceedings of the 8th ACM Workshop on Hot Topics in Networks (HotNets '09), New York, NY, USA, October 2009.

[24] G. Santandrea, Show My Network State, 2014, https://sites.google.com/site/showmynetworkstate.

[25] R. Jain and S. Paul, "Network virtualization and software defined networking for cloud computing: a survey," IEEE Communications Magazine, vol. 51, no. 11, pp. 24–31, 2013.

[26] RUDE & CRUDE: Real-Time UDP Data Emitter & Collector for RUDE, http://sourceforge.net/projects/rude.

[27] iperf3: a TCP, UDP, and SCTP network bandwidth measurement tool, https://github.com/esnet/iperf.

[28] nDPI: Open and Extensible LGPLv3 Deep Packet Inspection Library, http://www.ntop.org/products/ndpi.

Research Article
Server Resource Dimensioning and Routing of Service Function Chain in NFV Network Architectures

V. Eramo,1 A. Tosti,2 and E. Miucci1

1DIET, "Sapienza" University of Rome, Via Eudossiana 18, 00184 Rome, Italy
2Telecom Italia, Via di Val Cannuta 250, 00166 Roma, Italy

Correspondence should be addressed to E. Miucci; 1kronos1@gmail.com

Received 29 September 2015; Accepted 18 February 2016

Academic Editor: Vinod Sharma

Copyright © 2016 V. Eramo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The Network Function Virtualization (NFV) technology aims at virtualizing the network service with the execution of the single service components in Virtual Machines activated on Commercial-off-the-shelf (COTS) servers. Any service is represented by a Service Function Chain (SFC), that is, a set of VNFs to be executed according to a given order. The running of VNFs needs the instantiation of VNF instances (VNFIs) that, in general, are software components executed on Virtual Machines. In this paper we cope with the routing and resource dimensioning problem in NFV architectures. We formulate the optimization problem and, due to its NP-hard complexity, heuristics are proposed for both cases of offline and online traffic demand. We show how the heuristics work correctly by guaranteeing a uniform occupancy of the server processing capacity and the network link bandwidth. A consolidation algorithm for the power consumption minimization is also proposed. The application of the consolidation algorithm allows for a high power consumption saving that, however, is to be paid with an increase in SFC blocking probability.

1. Introduction

Today's networks are overly complex, partly due to an increasing variety of proprietary, fixed-function appliances that are unable to deliver the agility and economics needed to address constantly changing market requirements [1]. This is because network elements have traditionally been optimized for high packet throughput at the expense of flexibility, thus hampering the deployment of new services [2]. Network Function Virtualization (NFV) can provide the infrastructure flexibility and agility needed to successfully compete in today's evolving communications landscape [3]. NFV implements network functions in software running on a pool of shared commodity servers, instead of using dedicated proprietary hardware. This virtualized approach decouples the network hardware from the network function and results in increased infrastructure flexibility and reduced hardware costs. Because the infrastructure is simplified and streamlined, new and expanded services can be created quickly and with less expense. Implementation of the paradigm has also been proposed [4] and the performance has been investigated [5]. To support the NFV technology, both ETSI [6, 7] and

IETF [8, 9] are defining novel network architectures able to allocate resources for Virtualized Network Functions (VNFs), as well as to manage and orchestrate NFV to support services. In particular, the service is represented by a Service Function Chain (SFC) [8], that is, a set of VNFs that have to be executed according to a given order. Any VNF is run on a VNF instance (VNFI), implemented with one Virtual Machine (VM) whose resources (Vcores, RAM memory, etc.) are allocated to it [10]. Some solutions have been proposed in the literature to solve the problem of choosing the servers where to instantiate the VNFs and to determine the network paths interconnecting the VNFs [1]. A formulation of the optimization problem is illustrated in [11]. Three greedy algorithms and a tabu search-based heuristic are proposed in [12]. The extension of the problem in the case in which virtual routers are also considered is proposed in [13].

The innovative contribution of our paper is that, differently from [12], we follow an approach that allows for both the resource dimensioning and the SFC routing. We formulate the optimization problem whose objective is the minimization of the number of dropped SFC requests, with the constraint that the SFCs are routed so as to respect both


the server processing capacity and network link bandwidth. With the problem being NP-hard, we introduce a heuristic that allows for (i) the dimensioning of the Virtual Machines in terms of number of Vcores assigned and (ii) the SFC routing through the VNF instances implemented by the Virtual Machines. The heuristic performs simultaneously the dimensioning and routing operations. Furthermore, we propose a consolidation algorithm based on Virtual Machine migrations and able to achieve power consumption savings. The proposed algorithms are evaluated in scenarios characterized by offline and online traffic demands.

The paper is organized as follows. The related work is discussed in Section 2. Section 3 is devoted to illustrating the optimization problem and the proposed heuristic for the offline traffic demand case. In Section 4 we describe an SFC planner in which the proposed heuristic is applied in the case of online traffic demand. The planner also implements a consolidation algorithm whose application allows for power consumption savings. Some numerical results are shown in Section 5 to prove the effectiveness of the proposed heuristics. Finally, the main conclusions and future research items are mentioned in Section 6.

2. Related Work

The Internet Engineering Task Force (IETF) has formed the SFC Working Group [8, 9] to define Service Function Chaining related problems and to standardize the architecture and protocols. A Service Function Chain (SFC) is defined as a set of abstract service functions [15] and ordering constraints that must be applied to packets selected as a result of the classification. When virtual service functions are considered, the SFC is referred to as a Virtual Network Function Forwarding Graph (VNFFG) within the ETSI [6]. To support the SFCs, Virtual Network Function instances (VNFIs) are activated and executed in COTS servers. To achieve the economics of scale expected from NFV, network link and server resources should be used efficiently. For this reason, efficient algorithms have to be introduced to determine where to instantiate the VNFIs and to route the SFCs, by choosing the network paths and the VNFIs involved. The algorithms have to take into account the limited resources of the network links and the servers, and pursue objectives of load balancing, energy saving, recovery from failure, and so on [1]. The task of placing SFCs is closely related to virtual network embedding [16] and virtual data network embedding [17], and may therefore be formulated as an optimization problem with a particular objective. This approach has been followed by [11, 18–21]. For instance, Moens and Turck [11] formulate the SFC placement problem as a linear optimization problem that has the objective of minimizing the number of active servers. Other objectives are pursued in [19] (latency, remaining data rate, number of used network nodes, etc.), and the SFC placing is formulated as a mixed integer quadratically constrained program.

It has been proved that the SFC placing problem is NP-hard. For this reason, efficient heuristics have been proposed and evaluated [10, 13, 22, 23]. For example, Xia et al. [22] formulate the placement and chaining problem as binary integer programming and propose a greedy heuristic in which the SFCs are first sorted according to their resource demands, and the SFCs with the highest resource demands are given priority for placement and routing.

In order to achieve energy consumption savings, the NFV architecture should allow for migrations of VNFIs, that is, the migration of the Virtual Machines implementing the VNFIs. Though VNFI migrations allow for energy consumption savings, they may impact the QoS performance received by the users related to the migrated VNFs. A model has been proposed in [24] to derive some performance indicators, such as the whole service downtime and the total migration time, so as to make function migration decisions.

The contribution of this paper is twofold: (i) to propose and to evaluate the performance of an algorithm that performs simultaneously resource dimensioning and SFC routing and (ii) to investigate the advantages, from the point of view of power consumption savings, that the application of server consolidation techniques allows us to achieve.

3. Offline Algorithms for SFC Routing in NFV Network Architectures

We consider the case in which SFC requests are known in advance. We formulate the optimization problem whose objective is the minimization of the number of dropped SFC requests, with the constraint that the SFCs are routed so as to respect both the server processing capacity and network link bandwidth. With the problem being NP-hard, we introduce a heuristic that allows for (i) the dimensioning of the Virtual Machines in terms of the number of Vcores assigned and (ii) the SFC routing through the VNF instances implemented by the Virtual Machines. The heuristic performs simultaneously the dimensioning and routing operations.

The section is organized as follows. The network and traffic model is introduced in Section 3.1. Sections 3.2 and 3.3 are devoted to illustrating the optimization problem and the proposed heuristic, respectively.

3.1. Network and Traffic Model. Next we introduce the main terminology used to represent the physical network, the VNFs, and the SFC traffic requests [25]. We represent the physical network PN as a directed graph $G^{PN} = (V^{PN}, E^{PN})$, where $V^{PN}$ and $E^{PN}$ are the sets of physical nodes and links, respectively. The set $V^{PN}$ of nodes is given by the union of the three node sets $V^{PN}_A$, $V^{PN}_R$, and $V^{PN}_S$, which are the sets of access, switching, and server nodes, respectively. The server nodes and links are characterized by the following:

(i) $N^{PN}_{core}(w)$: processing capacity of the server node $w \in V^{PN}_S$, in terms of the number of cores available.

(ii) $C^{PN}(d)$: bandwidth of the physical link $d \in E^{PN}$.

We assume that $F$ types of VNFs can be provisioned, such as firewall, IDS, proxy, load balancer, and so on. We denote by $\mathcal{F} = \{f_1, f_2, \ldots, f_F\}$ the set of VNFs, with $f_i$ being the $i$th VNF type. The packet processing time of the VNF $f_i$ is denoted by $t^{proc}_i$ ($i = 1, \ldots, F$).

Figure 1: An example of graph representing an SFC request for a flow characterized by the bandwidth of 1 Mbps and packet length of 1500 bytes. (The chain consists of the ingress access node $u$, the VNF nodes $v_1$ and $v_2$, and the egress access node $t$, connected by the links $e_1$, $e_2$, and $e_3$, with $B^{SFC}(v_1) = B^{SFC}(v_2) = 83.33$ and $C^{SFC}(e_1) = C^{SFC}(e_2) = C^{SFC}(e_3) = 1$.)

We also assume that the network operator is receiving $T$ Service Function Chain (SFC) requests, known in advance. The $i$th SFC request is characterized by the graph $G^{SFC}_i = (V^{SFC}_i, E^{SFC}_i)$, where $V^{SFC}_i$ represents the set of access and VNF nodes and $E^{SFC}_i$ ($i = 1, \ldots, T$) denotes the links between them. In particular, the set $V^{SFC}_i$ is given by the union of $V^{SFC}_{i,A}$ and $V^{SFC}_{i,F}$, denoting the sets of access nodes and VNFs, respectively. The graph is characterized by the following parameters:

(i) $\alpha_{vw}$: assuming the value 1 if the access node $v \in \bigcup_{i=1}^{T} V^{SFC}_{i,A}$ characterizes an SFC request starting/terminating from/to the physical access node $w \in V^{PN}_A$; otherwise its value is 0.

(ii) $\beta_{vk}$: assuming the value 1 if the VNF node $v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}$ needs the application of the VNF $f_k$ ($k \in [1, F]$); otherwise its value is 0.

(iii) $B^{SFC}(v)$: the processing capacity requested by the VNF node $v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}$; the parameter value depends on both the packet length and the bandwidth of the packet flow incoming to the VNF node.

(iv) $C^{SFC}(e)$: bandwidth requested by the link $e \in \bigcup_{i=1}^{T} E^{SFC}_i$.

An example of SFC request is represented in Figure 1, where the traffic emitted by the ingress access node $u$ has to be handled by the VNF nodes $v_1$ and $v_2$ and is terminating at the egress access node $t$. The access and VNF nodes are interconnected by the links $e_1$, $e_2$, and $e_3$. If the flow bandwidth is 1 Mbps and the packet length is 1500 bytes, we obtain the processing capacities $B^{SFC}(v_1) = B^{SFC}(v_2) = 83.33$, while the bandwidths $C^{SFC}(e_1)$, $C^{SFC}(e_2)$, and $C^{SFC}(e_3)$ of all links equal 1.
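The processing capacity value of this example follows directly from the flow parameters, interpreting $B^{SFC}(v)$ as the packet rate offered to the VNF node:

$$B^{SFC}(v_1) = B^{SFC}(v_2) = \frac{10^6\,\text{bits/s}}{1500\,\text{bytes} \times 8\,\text{bits/byte}} \approx 83.33\,\text{packets/s}.$$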

3.2. Optimization Problem for the SFC Routing and VNF Instance Dimensioning. The objective of the SFC Routing and VNF Instance Dimensioning (SRVID) problem is to maximize the number of accepted SFC requests. The output of the problem is characterized by the following: (i) the servers in which the VNF nodes are executed and (ii) the network paths in which the virtual links of the SFC are routed. This arrangement has to be accomplished without violating both the server processing and the physical link capacities. We assume that all of the servers create a VNF instance for each type of VNF, which will be shared by the SFCs using that server and requesting that type of VNF. For this reason, each server will activate $F$ VNF instances, one for each type, and another output of the problem is to determine the number of Vcores to be assigned to each VNF instance.

We assume that one virtual link of the SFC graphs can be routed through a single physical network path. We introduce the following notations:

(i) $P$: set of paths in $G^{PN}$.

(ii) $\delta_{ep}$: the binary function assuming value 1 or 0 if the physical network link $e$ belongs or does not belong to the path $p \in P$, respectively.

(iii) $a^{PN}(p)$ and $b^{PN}(p)$: origin and destination nodes of the path $p \in P$.

(iv) $a^{SFC}(d)$ and $b^{SFC}(d)$: origin and destination nodes of the virtual link $d \in \bigcup_{i=1}^{T} E^{SFC}_i$.

Next we formulate the optimal SRVID problem, characterized by the following optimization variables:

(i) $x_h$: binary variable assuming the value 1 if the $h$th SFC request is accepted; otherwise its value is zero.

(ii) $y_{wk}$: integer variable characterizing the number of Vcores allocated to the VNF instance of type $k$ in the server $w \in V^{PN}_S$.

(iii) $z^k_{vw}$: binary variable assuming the value 1 if the VNF node $v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}$ is served by the VNF instance of type $k$ in the server $w \in V^{PN}_S$.

(iv) $u_{dp}$: binary variable assuming the value 1 if the virtual link $d \in \bigcup_{i=1}^{T} E^{SFC}_i$ is embedded in the physical network path $p \in P$; otherwise its value is zero.

Next we report the constraints for the optimization variables:

$$\sum_{k=1}^{F} y_{wk} \le N^{PN}_{core}(w), \quad w \in V^{PN}_S \quad (1)$$

$$\sum_{k=1}^{F} \sum_{w \in V^{PN}_S} z^k_{vw} \le 1, \quad v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F} \quad (2)$$

$$z^k_{vw} \le \beta_{vk}, \quad k \in [1, F],\ v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F},\ w \in V^{PN}_S \quad (3)$$

$$\sum_{v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}} z^k_{vw}\, B^{SFC}(v)\, t^{proc}_k \le y_{wk}, \quad k \in [1, F],\ w \in V^{PN}_S \quad (4)$$

$$u_{dp} \le z^k_{a^{SFC}(d)\,a^{PN}(p)}, \quad a^{SFC}(d) \in \bigcup_{i=1}^{T} V^{SFC}_{i,F},\ k \in [1, F],\ p \in P \quad (5)$$

$$u_{dp} \le z^k_{b^{SFC}(d)\,b^{PN}(p)}, \quad b^{SFC}(d) \in \bigcup_{i=1}^{T} V^{SFC}_{i,F},\ k \in [1, F],\ p \in P \quad (6)$$

$$u_{dp} \le \alpha_{a^{SFC}(d)\,a^{PN}(p)}, \quad a^{SFC}(d) \in \bigcup_{i=1}^{T} V^{SFC}_{i,A},\ p \in P \quad (7)$$

$$u_{dp} \le \alpha_{b^{SFC}(d)\,b^{PN}(p)}, \quad b^{SFC}(d) \in \bigcup_{i=1}^{T} V^{SFC}_{i,A},\ p \in P \quad (8)$$

$$\sum_{p \in P} u_{dp} \le 1, \quad d \in \bigcup_{i=1}^{T} E^{SFC}_i \quad (9)$$

$$\sum_{d \in \bigcup_{i=1}^{T} E^{SFC}_i} C^{SFC}(d) \sum_{p \in P} \delta_{ep}\, u_{dp} \le C^{PN}(e), \quad e \in E^{PN} \quad (10)$$

$$x_h \le \sum_{k=1}^{F} \sum_{w \in V^{PN}_S} z^k_{vw}, \quad h \in [1, T],\ v \in V^{SFC}_{h,F} \quad (11)$$

$$x_h \le \sum_{p \in P} u_{dp}, \quad h \in [1, T],\ d \in E^{SFC}_h \quad (12)$$

Constraint (1) establishes the fact that at most a number of Vcores equal to the available ones is used for each server node $w \in V^{PN}_S$. Constraint (2) expresses the condition that a VNF node can be served by one only VNF instance. To guarantee that a node $v \in V^{SFC}_{h,F}$ needing the application of a VNF type is mapped to a correct VNF instance, we introduce constraint (3). Constraint (4) limits the number of VNF nodes assigned to one VNF instance, by taking into account both the number of Vcores assigned to the VNF instance and the required VNF node processing capacities. Constraints (5)–(8) establish the fact that, when the virtual link $d$ is supported by the physical network path $p$, then $a^{PN}(p)$ and $b^{PN}(p)$ must be the physical network nodes that the nodes $a^{SFC}(d)$ and $b^{SFC}(d)$ of the virtual graph are assigned to. The choice of mapping any virtual link onto a single physical network path is represented by constraint (9). Constraint (10) avoids any overloading of any physical network link. Finally, constraints (11) and (12) establish the fact that an SFC request can be accepted only when the nodes and the links of its SFC graph are assigned to VNF instances and physical network paths.

The problem objective is to maximize the number of SFCs accepted, given by

$$\max \sum_{h=1}^{T} x_h. \quad (13)$$

The introduced optimization problem is intractable, because it requires solving an NP-hard bin packing problem [26]. Due to its high complexity, it is not possible to solve the problem directly in a timely manner, given the large number of servers and network nodes. For this reason, we propose an efficient heuristic in Section 3.3.

3.3. Maximizing the Accepted SFC Requests Number (MASRN) Heuristic. The Maximizing the Accepted SFC Requests Number (MASRN) heuristic is based on the embedding algorithm proposed in [14] and has the objective of maximizing the number of accepted SFC requests. The MASRN algorithm tries routing the SFCs one by one, by using the least loaded link and server resources so as to balance their use. Algorithm 1 illustrates the operation mode of the MASRN algorithm. The routing of a target SFC, the $k_{tar}$th one, is illustrated. The main inputs of the algorithm are: (i) the set $T_{pre}$ containing the indexes of the SFC requests routed successfully before the $k_{tar}$th SFC request; (ii) the $k_{tar}$th SFC request graph $G^{SFC}_{k_{tar}} = (V^{SFC}_{k_{tar}}, E^{SFC}_{k_{tar}})$; (iii) the values of the parameters $\alpha_{vw}$ for the $k_{tar}$th SFC request; (iv) the values of the variables $z^k_{vw}$ and $u_{dp}$ for the SFC requests successfully routed before the $k_{tar}$th SFC request, as defined in Section 3.2.

The operation mode of the heuristic is based on the following variables:

(i) $A_{k}(w)$: the load of the cores allocated for the VNF instance of type $k \in [1, F]$ in the server $w \in \mathcal{V}_{S}^{\mathrm{PN}}$; the value of $A_{k}(w)$ is initialized by taking into account the core resource amount used by the SFCs successfully routed before the $k_{\mathrm{tar}}$th SFC request; hence its initial value is
$$A_{k}(w) = \sum_{v \in \bigcup_{i \in T^{\mathrm{pre}}} \mathcal{V}_{F}^{\mathrm{SFC}_{i}}} z^{k}_{vw}\, B_{\mathrm{SFC}}(v)\, t^{\mathrm{proc}}_{k}. \tag{14}$$

(ii) $S_{N}(w)$: defined as the stress of the node $w \in \mathcal{V}_{S}^{\mathrm{PN}}$ [14] and characterizing the server load; its value is initialized to
$$S_{N}(w) = \sum_{k=1}^{F} A_{k}(w). \tag{15}$$

(iii) $S_{L}(e)$: defined as the stress of the physical network link $e \in \mathcal{E}^{\mathrm{PN}}$ [14] and characterizing the link load; its value is initialized to
$$S_{L}(e) = \sum_{d \in \bigcup_{i \in T^{\mathrm{pre}}} \mathcal{E}^{\mathrm{SFC}_{i}}} C_{\mathrm{SFC}}(d) \sum_{p \in \mathcal{P}} \delta_{ep}\, u_{dp}. \tag{16}$$


(1) Input: physical network graph $G^{\mathrm{PN}} = (\mathcal{V}^{\mathrm{PN}}, \mathcal{E}^{\mathrm{PN}})$; the $k_{\mathrm{tar}}$th SFC request and its graph $G^{\mathrm{SFC}}_{k_{\mathrm{tar}}} = (\mathcal{V}^{\mathrm{SFC}}_{k_{\mathrm{tar}}}, \mathcal{E}^{\mathrm{SFC}}_{k_{\mathrm{tar}}})$; $T^{\mathrm{pre}}$; $\alpha_{vw}$, $v \in \mathcal{V}^{\mathrm{SFC}_{k_{\mathrm{tar}}}}_{A}$, $w \in \mathcal{V}^{\mathrm{PN}}_{A}$; $z^{k}_{vw}$, $k \in [1, F]$, $v \in \bigcup_{i \in T^{\mathrm{pre}}} \mathcal{V}^{\mathrm{SFC}_{i}}_{F}$, $w \in \mathcal{V}^{\mathrm{PN}}_{S}$; $u_{dp}$, $d \in \bigcup_{i \in T^{\mathrm{pre}}} \mathcal{E}^{\mathrm{SFC}_{i}}$, $p \in \mathcal{P}$
(2) Variables: $A_{k}(w)$, $k \in [1, F]$, $w \in \mathcal{V}^{\mathrm{PN}}_{S}$; $S_{N}(w)$, $w \in \mathcal{V}^{\mathrm{PN}}_{S}$; $S_{L}(e)$, $e \in \mathcal{E}^{\mathrm{PN}}$; $y_{wk}$, $k \in [1, F]$, $w \in \mathcal{V}^{\mathrm{PN}}_{S}$
/* Evaluation of the candidate mapping of $G^{\mathrm{SFC}}_{k_{\mathrm{tar}}}$ in $G^{\mathrm{PN}}$ */
(3) assign each node $v \in \mathcal{V}^{\mathrm{SFC}_{k_{\mathrm{tar}}}}_{A}$ to the physical network node $w \in \mathcal{V}^{\mathrm{PN}}_{A}$ according to the value of the parameter $\alpha_{vw}$
(4) select the nodes $w \in \mathcal{V}^{\mathrm{PN}}_{S}$ and the physical network paths $p \in \mathcal{P}$ to be assigned to the server nodes $v \in \mathcal{V}^{\mathrm{SFC}_{k_{\mathrm{tar}}}}_{F}$ and virtual links $d \in \mathcal{E}^{\mathrm{SFC}_{k_{\mathrm{tar}}}}$ by applying the algorithm proposed in [14], based on the values of the node and link stresses $S_{N}(w)$ and $S_{L}(e)$; determine the variable values $z^{k}_{vw}$, $k \in [1, F]$, $v \in \mathcal{V}^{\mathrm{SFC}_{k_{\mathrm{tar}}}}_{F}$, $w \in \mathcal{V}^{\mathrm{PN}}_{S}$, and $u_{dp}$, $d \in \mathcal{E}^{\mathrm{SFC}_{k_{\mathrm{tar}}}}$, $p \in \mathcal{P}$
/* Server Resource Availability Check Phase */
(5) for $v \in \mathcal{V}^{\mathrm{SFC}_{k_{\mathrm{tar}}}}_{F}$, $w \in \mathcal{V}^{\mathrm{PN}}_{S}$, $k \in [1, F]$ do
(6)  if $\sum_{s=1, s \ne k}^{F} y_{ws} + \lceil A_{k}(w) + z^{k}_{vw} B_{\mathrm{SFC}}(v)\, t^{\mathrm{proc}}_{k} \rceil \le N^{\mathrm{PN}}_{\mathrm{core}}(w)$ then
(7)   $A_{k}(w) = A_{k}(w) + z^{k}_{vw} B_{\mathrm{SFC}}(v)\, t^{\mathrm{proc}}_{k}$
(8)   $S_{N}(w) = S_{N}(w) + z^{k}_{vw} B_{\mathrm{SFC}}(v)\, t^{\mathrm{proc}}_{k}$
(9)   $y_{wk} = \lceil A_{k}(w) \rceil$
(10) else
(11)  REJECT the $k_{\mathrm{tar}}$th SFC request
(12) end if
(13) end for
/* Link Resource Availability Check Phase */
(14) for $e \in \mathcal{E}^{\mathrm{PN}}$ do
(15)  if $S_{L}(e) + \sum_{d \in \mathcal{E}^{\mathrm{SFC}_{k_{\mathrm{tar}}}}} C_{\mathrm{SFC}}(d) \sum_{p \in \mathcal{P}} \delta_{ep} u_{dp} \le C_{\mathrm{PN}}(e)$ then
(16)   $S_{L}(e) = S_{L}(e) + \sum_{d \in \mathcal{E}^{\mathrm{SFC}_{k_{\mathrm{tar}}}}} C_{\mathrm{SFC}}(d) \sum_{p \in \mathcal{P}} \delta_{ep} u_{dp}$
(17)  else
(18)   REJECT the $k_{\mathrm{tar}}$th SFC request
(19)  end if
(20) end for
(21) Output: $z^{k}_{vw}$, $k \in [1, F]$, $v \in \mathcal{V}^{\mathrm{SFC}_{k_{\mathrm{tar}}}}_{F}$, $w \in \mathcal{V}^{\mathrm{PN}}_{S}$; $u_{dp}$, $d \in \mathcal{E}^{\mathrm{SFC}_{k_{\mathrm{tar}}}}$, $p \in \mathcal{P}$

Algorithm 1: MASRN algorithm.

The candidate physical network links and nodes in which to embed the target SFC are evaluated. First of all, the access nodes are evaluated (line 3) according to the values of the parameters $\alpha_{vw}$, $v \in \mathcal{V}^{\mathrm{SFC}_{k_{\mathrm{tar}}}}_{A}$, $w \in \mathcal{V}^{\mathrm{PN}}_{A}$. Next, the MASRN algorithm selects a cluster of server nodes (line 4) that are not only lightly loaded but also likely to result in low substrate link stresses when they are connected. Details on the procedure are reported in [14].

In the next phases, the resource availability in the selected cluster is checked. The resource availability in the server nodes and physical network links is verified in the Server (lines 5-13) and Link (lines 14-20) Resource Availability Check Phases, respectively. If resources are not available in either links or nodes, the target SFC request is rejected. In particular, in the Server Resource Availability Check Phase, the algorithm verifies whether the allocation of new Vcores is needed and, in this case, their availability is checked (line 6). The variable $y_{wk}$ is also updated in this phase (line 9).
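As a concrete illustration of these two phases, the following sketch reproduces the availability checks of Algorithm 1 on dictionary-based state; the data layout and all names are our own assumptions, and rejection is simplified to an early return without rolling back partial updates.

import math

def masrn_availability_check(z, u, sfc, net, state):
    """Sketch of the check phases of Algorithm 1 (lines 5-20).
    z[(k, v, w)] and u[(d, p)] hold the candidate-mapping variables;
    state holds A, S_N, S_L, and y as defined in Section 3.3."""
    # Server Resource Availability Check Phase (lines 5-13)
    for (k, v, w), z_val in z.items():
        if z_val == 0:
            continue
        load = sfc["B"][v] * net["t_proc"][k]         # Vcore load of node v
        others = sum(state["y"].get((w, s), 0)
                     for s in range(net["F"]) if s != k)
        if others + math.ceil(state["A"].get((k, w), 0.0) + load) \
                > net["N_core"][w]:
            return False                              # REJECT the SFC
        state["A"][(k, w)] = state["A"].get((k, w), 0.0) + load
        state["S_N"][w] = state["S_N"].get(w, 0.0) + load
        state["y"][(w, k)] = math.ceil(state["A"][(k, w)])
    # Link Resource Availability Check Phase (lines 14-20)
    for e in net["links"]:
        extra = sum(sfc["C"][d] * net["delta"][(e, p)] * u_val
                    for (d, p), u_val in u.items())
        if state["S_L"].get(e, 0.0) + extra > net["C_PN"][e]:
            return False                              # REJECT the SFC
        state["S_L"][e] = state["S_L"].get(e, 0.0) + extra
    return True                                       # ACCEPT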

Finally, the outputs (line 21) of the MASRN algorithm are the selected values of the variables $z^{k}_{vw}$, $k \in [1, F]$, $v \in \mathcal{V}^{\mathrm{SFC}_{k_{\mathrm{tar}}}}_{F}$, $w \in \mathcal{V}^{\mathrm{PN}}_{S}$, and $u_{dp}$, $d \in \mathcal{E}^{\mathrm{SFC}_{k_{\mathrm{tar}}}}$, $p \in \mathcal{P}$.

The computational complexity of the MASRN algorithm depends on the procedure for the evaluation of the potential nodes [14]. Let $N_{s}$ be the total number of servers. The complexity of the phase in which the potential nodes are evaluated can be derived according to the following remarks: (i) as many servers as the number of the VNFs of the SFC are determined, and (ii) the $h$th server is selected by evaluating the $(N_{s} - h)$ shortest paths and by performing a minimum operation on a list with $(N_{s} - h)$ elements. According to these remarks, the computational complexity is given by $O(V(V + N_{s})N_{l}\log(N_{s} + N_{n}))$, where $V$ is the maximum number of VNFs in an SFC and $N_{n}$ and $N_{l}$ are the numbers of nodes and links of the network, respectively.

[Figure 2: SFC planner architecture. The orchestrator comprises the SFC Scheduler, the Resource Monitor, and the Consolidation Module, exchanging SFC requests, scheduling decisions, migration decisions, and resource information.]

4. Online Algorithms for SFC Routing in NFV Network Architectures

In the online approach, the SFC requests arrive at the system over time, and each of them is characterized by a certain duration. In order to reduce the complexity of the online SFC resource allocation algorithm, we designed an SFC planner whose general architecture is described in Section 4.1. The operation mode of the SFC planner is based on some algorithms that we describe in Section 4.2.

4.1. SFC Planner Architecture. The architecture of the SFC planner is shown in Figure 2. It could make up the main component of an orchestrator in the NFV architecture [6]. It is composed of three components: the SFC Scheduler, the Resource Monitor, and the Consolidation Module.

Once the SFC Scheduler receives an SFC request, it executes an algorithm to verify whether resources are available and to decide which resources to use.

The physical resources, as well as the Virtual Machines running on the servers, are monitored by the Resource Monitor. If a failure of any physical/virtual node or link occurs, it notifies the event to the SFC Scheduler.

The Consolidation Module allows for server resource consolidation in low traffic periods. When the consolidation technique is applied, VMs are migrated to as few servers as possible, the remaining ones are turned off, and power consumption saving can be achieved.

4.2. SFC Scheduling and Consolidation Algorithms. In this section we describe the algorithms executed by the SFC Scheduler and the Consolidation Module.

Whenever a new SFC request arrives at the SFC planner, the SFC scheduling algorithm is executed in order to find the best embedding in the system while maintaining a uniform usage of the network resources. The main steps of this procedure are reported in the flow chart in Figure 3 and are based on the MASRN heuristic described in Section 3.3. The algorithm starts by selecting the set of servers on which the VNFs requested by the SFC will be executed. In the second step, the physical paths on which the virtual links will be mapped are selected. If there are not enough available resources in the first or in the second step, the request is dropped.

[Figure 3: Flow chart of the SFC scheduling algorithm. Servers are selected using the potential factor [26]; if enough resources are available, path selection follows; if either step fails, the SFC request is dropped, otherwise it is accepted.]

The flow chart of the consolidation algorithm is reported in Figure 4. Each server $s \in \mathcal{V}^{\mathrm{PN}}_{S}$ is characterized by a parameter $\eta_{s}$, defined as the ratio of the total amount of incoming/outgoing traffic $\Lambda_{s}$ that it is handling to the power $P_{s}$ it is consuming, that is, $\eta_{s} = \Lambda_{s}/P_{s}$. The power consumption model assumes a constant power contribution and a variable one that linearly increases with the handled traffic. The server power consumption is expressed by
$$P_{s} = P_{\mathrm{idle}} + (P_{\max} - P_{\mathrm{idle}}) \frac{L_{s}}{C_{s}}, \quad \forall s \in \mathcal{V}^{\mathrm{PN}}_{S}, \tag{17}$$
where $P_{\mathrm{idle}}$ is the power consumed by the server when no traffic is handled, $P_{\max}$ is the maximum server power consumption, $L_{s}$ is the total load handled by the server $s$, and $C_{s}$ is its maximum capacity. The maximum power that the server can consume is expressed as $P_{\max} = P_{\mathrm{idle}}/a$, where $a$ is a parameter characterizing how much the power consumption depends on the handled traffic. The lower $a$ is, the more rate adaptive the power consumption is. For $a = 1$, the rate adaptive power consumption is absent and the consumed power is constant.

The proposed algorithm tries to switch off the server with the minimum power consumption per bit carried. For this reason, when the consolidation algorithm is executed, it starts by selecting the server $s_{\min}$ with the minimum value of $\eta_{s}$ as the one to turn off. A set of possible destination servers able to host the VMs executed on $s_{\min}$ is then evaluated and sorted in descending order, based again on the value of $\eta_{s}$. The algorithm proceeds by selecting the first element of the ordered set and evaluating whether there are enough resources in the site to reroute all the flows generated from or terminated at $s_{\min}$. If it succeeds, the migration is performed and the server $s_{\min}$ is turned off. If it fails, the destination server is removed from the set and the next server of the list is considered. When the ordered set of possible destinations becomes empty, another server to turn off is selected.

[Figure 4: Flow chart of the SFC consolidation algorithm. The server with minimum $\eta_{s}$ is selected; the set of possible destinations is sorted by descending $\eta_{s}$ and scanned until a feasible migration is found or the set is empty.]

The SFC Consolidation Module also manages the resource deconsolidation procedure, when the reactivation of physical servers/links is needed. Basically, the deconsolidation algorithm selects the most loaded VMs already migrated to a new server and moves them back to their original servers. The flows handled by these VMs are rerouted accordingly.
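The selection logic of the consolidation procedure can be summarized by the following sketch, under the simplifying assumption that all the VMs of a candidate server migrate to a single destination; can_reroute and migrate are hypothetical callbacks standing in for the feasibility check and the actual migration.

def consolidate(servers, eta, can_reroute, migrate):
    """Sketch of the consolidation loop of Figure 4. Candidate servers are
    examined for switch-off in ascending order of eta_s; destinations are
    tried in descending order of eta_s until one can absorb the flows."""
    active = set(servers)
    for s_min in sorted(servers, key=lambda s: eta[s]):
        destinations = sorted((s for s in active if s != s_min),
                              key=lambda s: eta[s], reverse=True)
        for dst in destinations:
            if can_reroute(s_min, dst):    # enough server/link resources?
                migrate(s_min, dst)        # move VMs and reroute flows
                active.discard(s_min)      # s_min can be switched off
                break                      # proceed with the next candidate
    return active                          # servers left switched on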

5. Numerical Results

We evaluate the effectiveness of the proposed algorithms in the case of the network scenario reported in Figure 5. The network is composed of five core nodes, five edge nodes, and six access nodes, in which the SFC requests are randomly generated and terminated. A Network Function Virtualization (NFV) site is connected to edge nodes 3 and 4 through two access routers 1 and 2. The NFV site is also composed of two switches and sixteen servers. In the basic configuration, 40 Gbps links are considered, except the links connecting the servers to the switches in the NFV site, whose rate is equal to 10 Gbps. The sixteen servers are equipped with 48 Vcores each.

Each SFC request is composed of three Virtual Network Functions (VNFs), that is, a firewall, a load balancer, and VPN encryption, whose packet processing times are assumed to be equal to 7.08 μs, 0.65 μs, and 1.64 μs [27], respectively. We also assume that the load balancer splits the input traffic towards a number of output links chosen uniformly from 1 to 3.

An example of the graph for a generated SFC is reported in Figure 6, in which $u_{t}$ is an ingress access node and $v^{1}_{t}$, $v^{2}_{t}$, and $v^{3}_{t}$ are egress access nodes. The first and second VNFs are a firewall and VPN encryption; the third VNF is a load balancer splitting the traffic towards three output links. In the considered case study, we also assume that the SFC handles traffic flows of packet length equal to 1500 bytes and required bandwidth uniformly distributed from 500 Mbps to 1 Gbps.

5.1. Offline SFC Scheduling. In the offline case, we assume that a given number $T$ of SFC requests is a priori known. For the case study previously described, we apply the MASRN heuristic proposed in Section 3.3. We report the percentage of dropped SFCs in Figure 7 as a function of the offered number of SFCs. We report the curve for the basic scenario and the ones in which the link capacities are doubled, quadrupled, and increased by ten times, respectively. From Figure 7 we can see how, in order to have a dropped SFC percentage lower than 10%, the number of SFC requests has to be smaller than 60 in the basic scenario. This remarkable blocking is due to the shortage of network capacity. In fact, we can notice from Figure 7 how the dropped SFC percentage significantly decreases when the link bandwidth increases. For instance, when the capacity is quadrupled, the network infrastructure is able to accommodate up to 180 SFCs without any loss.

This is confirmed in Figure 8, where we report the number of Vcores occupied in the servers in the case of $T = 360$. Each of the servers is represented on the $x$-axis with the identification (ID) of Figure 5. We also show in Figure 8 the number of Vcores allocated to each firewall, load balancer, and VPN encryption VNF with the blue, green, and red colors, respectively. The basic scenario case and the ones in which the link capacity is doubled, quadrupled, and increased by 10 times are reported in Figures 8(a), 8(b), 8(c), and 8(d), respectively. From Figure 8 we can notice how the MASRN heuristic works correctly by guaranteeing a uniform occupancy of the servers in terms of the total number of Vcores used. We can also notice how the firewall VNF is the one needing the highest number of Vcores allocated. That is a consequence of the higher processing time that the firewall VNF requires with respect to the load balancer and VPN encryption VNFs.

5.2. Online SFC Scheduling. The numerical results are achieved in the following traffic scenario. The traffic pattern we consider is supposed to be sinusoidal, with a peak value during daytime and a minimum value during nighttime. We also assume that SFC requests follow a Poisson process and that the SFC duration time is distributed according to an exponential distribution.

[Figure 5: The network topology is composed of six access nodes, five edge nodes, and five core nodes. The NFV site is composed of two routers, two switches, and sixteen servers.]

The following parameters define the SFC requests in the Peak Hour Interval (PHI):

(i) $\lambda$: the arrival rate of SFC requests;

(ii) $\mu$: the termination rate of an embedded SFC;

(iii) $[\beta^{\min}, \beta^{\max}]$: the range in which the bandwidth request for an SFC is randomly selected.

We consider $K$ time intervals in a day, and each of them is characterized by a scaling factor $\alpha_{h}$, with $h = 0, \ldots, K-1$. In order to capture the sinusoidal shape of the traffic in a day, $\alpha_{h}$ is evaluated with the following expression:
$$\alpha_{h} = \begin{cases} 1 & h = 0 \\ 1 - \dfrac{2h}{K}\,(1 - \alpha_{\min}) & h = 1, \ldots, \dfrac{K}{2} \\ 1 - \dfrac{2(K-h)}{K}\,(1 - \alpha_{\min}) & h = \dfrac{K}{2} + 1, \ldots, K - 1, \end{cases} \tag{18}$$
where $\alpha_{0}$ and $\alpha_{\min}$ refer to the peak traffic and the minimum traffic scenario, respectively.

The parameter $\alpha_{h}$ affects the SFC arrival rate and the bandwidth requested. In particular, we have the following expressions for the SFC request rate $\lambda_{h}$ and the minimum and maximum requested bandwidths $\beta^{\min}_{h}$ and $\beta^{\max}_{h}$, respectively, in the interval $h$ ($h = 0, \ldots, K-1$):
$$\lambda_{h} = \alpha_{h}\lambda, \qquad \beta^{\min}_{h} = \alpha_{h}\beta^{\min}, \qquad \beta^{\max}_{h} = \alpha_{h}\beta^{\max}. \tag{19}$$

[Figure 6: An example of SFC composed of one firewall VNF (FW), one VPN encryption VNF (EV), and one load balancer VNF (LB) that splits the input traffic towards three output logical links; $u_{t}$ denotes the ingress access node, while $v^{1}_{t}$, $v^{2}_{t}$, and $v^{3}_{t}$ denote the egress access nodes.]

[Figure 7: Dropped SFC percentage as a function of the number of SFCs offered. The four curves are reported for the basic scenario case and the ones in which the link capacities are doubled, quadrupled, and increased by 10 times.]

Next, we evaluate the performance of the SFC planner proposed in Section 4 in the dynamic traffic scenario and for the parameter values $\beta^{\min} = 500$ Mbps, $\beta^{\max} = 1$ Gbps, $\lambda = 1$ request/min, and $\mu = 1/15$ min$^{-1}$. We also assume $K = 24$, with traffic change, and consequently application of the consolidation algorithm, every hour.

Finally, we consider servers whose power consumption is characterized by expression (17), with parameters $P_{\max} = 1000$ W and $C_{s} = 10$ Gbps.
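A small helper reproducing (18) and (19) is reported below; $\alpha_{\min}$ is a free parameter of the scenario, and the value used here is only an assumed example.

def alpha(h, K=24, alpha_min=0.2):
    """Scaling factor (18): peak alpha_0 = 1 at h = 0 and minimum
    alpha_min at h = K/2, giving the daily traffic profile."""
    if h == 0:
        return 1.0
    if h <= K // 2:
        return 1.0 - (2.0 * h / K) * (1.0 - alpha_min)
    return 1.0 - (2.0 * (K - h) / K) * (1.0 - alpha_min)

lam, b_min, b_max = 1.0, 500e6, 1e9        # lambda = 1 req/min, 500 Mbps-1 Gbps
profile = [(alpha(h) * lam,                # lambda_h        (19)
            alpha(h) * b_min,              # beta^min_h      (19)
            alpha(h) * b_max)              # beta^max_h      (19)
           for h in range(24)]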

We report the SFC blocking probability as a function of the average number of offered SFC requests during the PHI ($\lambda/\mu$) in Figure 9. We show the results in the case of the basic link capacity scenario and in the ones obtained considering a capacity of the links that is twice and four times higher than the basic one. We consider the case of servers with no rate adaptive power consumption ($a = 1$). As we can observe, when the offered traffic is low, the degradation in terms of SFC blocking probability introduced by the consolidation technique increases. In fact, in this low traffic scenario, more resource consolidation is possible, with the consequence of higher SFC blocking probabilities. When the offered traffic increases, the blocking probability with or without applying the consolidation technique does not change much, because the network is heavily loaded and only few servers can be turned off. Furthermore, as we can expect, the blocking probability decreases when the link capacity increases. The performance loss due to the application of the consolidation technique is what we have to pay in order to obtain benefits in terms of power saved by turning off servers. The curves in Figure 10 compare the power consumption percentage savings that we can achieve in the cases $a = 0.1$ and $a = 1$. The results show that, when the offered traffic decreases and the link capacity increases, higher power consumption savings can be achieved. The reason is that we have more resources on the links and less loaded VMs, which allow for a higher number of VM migrations and more resource consolidation. The other result to notice is that the use of rate adaptive servers reduces the power consumption saving when we perform resource consolidation. This is due to the lower value $P_{\mathrm{idle}}$ of the constant power consumption of the server, which leads to lower power consumption saving when a server is switched off.

6. Conclusions

The aim of this paper has been to propose heuristics for the resource dimensioning and the routing of Service Function Chains in network architectures employing the Network Function Virtualization technology. We have introduced the optimization problem that has the objective of maximizing the number of SFCs accepted, under the compliance of server processing and link bandwidth capacity constraints. With the problem being NP-hard, we have proposed the Maximizing the Accepted SFC Requests Number (MASRN) heuristic, which is based on the uniform occupancy of the server and link resources. The heuristic is also able to dimension the Vcores of the servers by evaluating the number of Vcores to be allocated to each VNF instance.

An SFC planner has also been proposed for the dynamic traffic scenario. The planner is based on a consolidation algorithm that allows for the switching off of servers during the low traffic periods. A case study has been analyzed, in which we have shown that the heuristic works correctly by uniformly allocating the server resources and reserving a higher number of Vcores for the VNFs characterized by higher packet processing times. Furthermore, the proposed SFC planner allows for a power consumption saving dependent on the offered traffic and varying from 10% to 70%. Such a saving is to be paid with an increase in SFC blocking probability. As future research, we will propose and investigate Virtual Machine migration policies allowing for the achievement of the right trade-off between power consumption saving and SFC blocking probability degradation.

[Figure 8: Number of allocated Vcores in each server for $T = 360$, in the basic scenario (a) and in the ones in which the link capacity is doubled (b), quadrupled (c), and increased by 10 times (d). The numbers of Vcores for the firewall, load balancer, and VPN encryption VNFs are represented with blue, green, and red colors.]

[Figure 9: SFC blocking probability as a function of the average number of SFCs offered in the PHI, for the cases in which the consolidation technique is applied and in which it is not. The server power consumption is constant ($a = 1$).]

[Figure 10: The power consumption percentage saving achieved with the application of the consolidation technique versus the average number of SFCs offered in the PHI. The cases $a = 1$ and $a = 0.1$ are considered.]


Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

The work described in this paper has been funded by Telecom Italia within the research agreement titled "Definition and Evaluation of Protocols and Architectures of a Network Automation Platform for TI-NVF Infrastructure".

References

[1] R. Mijumbi, J. Serrat, J. Gorricho, N. Bouten, F. De Turck, and R. Boutaba, "Network function virtualization: state-of-the-art and research challenges," IEEE Communications Surveys & Tutorials, vol. 18, no. 1, pp. 236–262, 2016.

[2] P. Veitch, M. J. McGrath, and V. Bayon, "An instrumentation and analytics framework for optimal and robust NFV deployment," IEEE Communications Magazine, vol. 53, no. 2, pp. 126–133, 2015.

[3] R. Yu, G. Xue, V. T. Kilari, and X. Zhang, "Network function virtualization in the multi-tenant cloud," IEEE Network, vol. 29, no. 3, pp. 42–47, 2015.

[4] Z. Bronstein, E. Roch, J. Xia, and A. Molkho, "Uniform handling and abstraction of NFV hardware accelerators," IEEE Network, vol. 29, no. 3, pp. 22–29, 2015.

[5] B. Han, V. Gopalakrishnan, L. Ji, and S. Lee, "Network function virtualization: challenges and opportunities for innovations," IEEE Communications Magazine, vol. 53, no. 2, pp. 90–97, 2015.

[6] ETSI Industry Specification Group (ISG) NFV, "ETSI Group Specifications on Network Function Virtualization, 1st Phase Documents," January 2015, http://docbox.etsi.org/ISG/NFV/Open.

[7] H. Hawilo, A. Shami, M. Mirahmadi, and R. Asal, "NFV: state of the art, challenges, and implementation in next generation mobile networks (vEPC)," IEEE Network, vol. 28, no. 6, pp. 18–26, 2014.

[8] The Internet Engineering Task Force (IETF), Service Function Chaining (SFC) Working Group (WG), 2015, https://datatracker.ietf.org/wg/sfc/charter/.

[9] The Internet Engineering Task Force (IETF), Service Function Chaining (SFC) Working Group (WG) Documents, 2015, https://datatracker.ietf.org/wg/sfc/documents/.

[10] M. Yoshida, W. Shen, T. Kawabata, K. Minato, and W. Imajuku, "MORSA: a multi-objective resource scheduling algorithm for NFV infrastructure," in Proceedings of the 16th Asia-Pacific Network Operations and Management Symposium (APNOMS '14), pp. 1–6, Hsinchu, Taiwan, September 2014.

[11] H. Moens and F. D. Turck, "VNF-P: a model for efficient placement of virtualized network functions," in Proceedings of the 10th International Conference on Network and Service Management (CNSM '14), pp. 418–423, November 2014.

[12] R. Mijumbi, J. Serrat, J. L. Gorricho, N. Bouten, F. De Turck, and S. Davy, "Design and evaluation of algorithms for mapping and scheduling of virtual network functions," in Proceedings of the 1st IEEE Conference on Network Softwarization (NetSoft '15), pp. 1–9, University College London, London, UK, April 2015.

[13] S. Clayman, E. Maini, A. Galis, A. Manzalini, and N. Mazzocca, "The dynamic placement of virtual network functions," in Proceedings of the IEEE Network Operations and Management Symposium (NOMS '14), pp. 1–9, Krakow, Poland, May 2014.

[14] Y. Zhu and M. H. Ammar, "Algorithms for assigning substrate network resources to virtual network components," in Proceedings of the 25th IEEE International Conference on Computer Communications (INFOCOM '06), pp. 1–12, IEEE, Barcelona, Spain, April 2006.

[15] G. Lee, M. Kim, S. Choo, S. Pack, and Y. Kim, "Optimal flow distribution in service function chaining," in Proceedings of the 10th International Conference on Future Internet (CFI '15), pp. 17–20, ACM, Seoul, Republic of Korea, June 2015.

[16] A. Fischer, J. F. Botero, M. T. Beck, H. De Meer, and X. Hesselbach, "Virtual network embedding: a survey," IEEE Communications Surveys & Tutorials, vol. 15, no. 4, pp. 1888–1906, 2013.

[17] M. Rabbani, R. Pereira Esteves, M. Podlesny, G. Simon, L. Zambenedetti Granville, and R. Boutaba, "On tackling virtual data center embedding problem," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 177–184, Ghent, Belgium, May 2013.

[18] A. Basta, W. Kellerer, M. Hoffmann, H. J. Morper, and K. Hoffmann, "Applying NFV and SDN to LTE mobile core gateways: the functions placement problem," in Proceedings of the 4th Workshop on All Things Cellular: Operations, Applications and Challenges (AllThingsCellular '14), pp. 33–38, ACM, Chicago, Ill, USA, August 2014.

[19] M. Bagaa, T. Taleb, and A. Ksentini, "Service-aware network function placement for efficient traffic handling in carrier cloud," in Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC '14), pp. 2402–2407, IEEE, Istanbul, Turkey, April 2014.

[20] S. Mehraghdam, M. Keller, and H. Karl, "Specifying and placing chains of virtual network functions," in Proceedings of the IEEE 3rd International Conference on Cloud Networking (CloudNet '14), pp. 7–13, October 2014.

[21] M. Bouet, J. Leguay, and V. Conan, "Cost-based placement of vDPI functions in NFV infrastructures," in Proceedings of the 1st IEEE Conference on Network Softwarization (NetSoft '15), pp. 1–9, London, UK, April 2015.

[22] M. Xia, M. Shirazipour, Y. Zhang, H. Green, and A. Takacs, "Network function placement for NFV chaining in packet/optical datacenters," Journal of Lightwave Technology, vol. 33, no. 8, Article ID 7005460, pp. 1565–1570, 2015.

[23] O. Hyeonseok, Y. Daeun, C. Yoon-Ho, and K. Namgi, "Design of an efficient method for identifying virtual machines compatible with service chain in a virtual network environment," International Journal of Multimedia and Ubiquitous Engineering, vol. 11, no. 9, Article ID 197208, 2014.

[24] W. Cerroni and F. Callegati, "Live migration of virtual network functions in cloud-based edge networks," in Proceedings of the IEEE International Conference on Communications (ICC '14), pp. 2963–2968, IEEE, Sydney, Australia, June 2014.

[25] P. Quinn and J. Guichard, "Service function chaining: creating a service plane via network service headers," Computer, vol. 47, no. 11, pp. 38–44, 2014.

[26] M. F. Zhani, Q. Zhang, G. Simona, and R. Boutaba, "VDC Planner: dynamic migration-aware Virtual Data Center embedding for clouds," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 18–25, May 2013.

[27] M. Dobrescu, K. Argyraki, and S. Ratnasamy, "Toward predictable performance in software packet-processing platforms," in Proceedings of the 9th USENIX Symposium on Networked Systems Design and Implementation, pp. 1–14, April 2012.

Research Article

A Game for Energy-Aware Allocation of Virtualized Network Functions

Roberto Bruschi,¹ Alessandro Carrega,¹,² and Franco Davoli¹,²

¹CNIT-University of Genoa Research Unit, 16145 Genoa, Italy
²DITEN, University of Genoa, 16145 Genoa, Italy

Correspondence should be addressed to Alessandro Carrega; alessandro.carrega@unige.it

Received 2 October 2015; Revised 22 December 2015; Accepted 11 January 2016

Academic Editor: Xavier Hesselbach

Copyright © 2016 Roberto Bruschi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Network Functions Virtualization (NFV) is a network architecture concept where network functionality is virtualized and separated into multiple building blocks that may connect or be chained together to implement the required services. The main advantages consist of an increase in network flexibility and scalability. Indeed, each part of the service chain can be allocated and reallocated at runtime, depending on demand. In this paper, we present and evaluate an energy-aware Game-Theory-based solution for resource allocation of Virtualized Network Functions (VNFs) within NFV environments. We consider each VNF as a player of the problem that competes for the physical network node capacity pool, seeking the minimization of individual cost functions. The physical network nodes dynamically adjust their processing capacity according to the incoming workload, by means of an Adaptive Rate (AR) strategy that aims at minimizing the product of energy consumption and processing delay. On the basis of the result of the nodes' AR strategy, the VNFs' resource sharing costs assume a polynomial form in the workflows, which admits a unique Nash Equilibrium (NE). We examine the effect of different (unconstrained and constrained) forms of the nodes' optimization problem on the equilibrium, and we compare the power consumption and delay achieved with energy-aware and non-energy-aware strategy profiles.

1. Introduction

In the last few years, power consumption has shown a growing and alarming trend in all industrial sectors, particularly in Information and Communication Technology (ICT). Public organizations, Internet Service Providers (ISPs), and telecom operators started reporting alarming statistics of network energy requirements and of the related carbon footprint since the first decade of the 2000s [1]. The Global e-Sustainability Initiative (GeSI) estimated a growth of ICT greenhouse gas emissions (in GtCO₂e, Gt of CO₂ equivalent gases) to 2.3% of global emissions (from 1.3% in 2002) in 2020, if no Green Network Technologies (GNTs) would be adopted [2]. On the other hand, the abatement potential of ICT in other industrial sectors is seven times the size of the ICT sector's own carbon footprint.

Only recently, due to the rise in energy price, the continuous growth of customer population, the increase in broadband access demand, and the expanding number of services being offered by telecoms and providers, has energy efficiency become a high-priority objective also for wired networks and service infrastructures (after having started to be addressed for datacenters and wireless networks).

The increasing network energy consumption essentially depends on new services offered, which follow Moore's law by doubling every two years, and on the need to sustain an ever-growing population of users and user devices. In order to support new generation network infrastructures and related services, telecoms and ISPs need a larger equipment base, with sophisticated architecture able to perform more and more complex operations in a scalable way. Notwithstanding these efforts, it is well known that most networks and networking equipment are currently still provisioned for busy or rush hour load, which typically exceeds their average utilization by a wide margin. While this margin is generally reached in rare and short time periods, the overall power consumption in today's networks remains more or less constant with respect to different traffic utilization levels.


The growing trend toward implementing networking functionalities by means of software [3] on general-purpose machines, and of making more aggressive use of virtualization, as represented by the paradigms of Software Defined Networking (SDN) [4] and Network Functions Virtualization (NFV) [5], would also not be sufficient in itself to reduce power consumption, unless accompanied by "green" optimization and consolidation strategies acting as energy-aware traffic engineering policies at the network level [6]. At the same time, processing devices inside the network need to be capable of adapting their performance to the changing traffic conditions, by trading off power and Quality of Service (QoS) requirements. Among the various techniques that can be adopted to this purpose to implement Control Policies (CPs) in network processing devices, Dynamic Adaptation ones consist of adapting the processing rate (AR) or of exploiting low power consumption states in idle periods (LPI) [7].

In this paper, we introduce a Game-Theory-based solution for energy-aware allocation of Virtualized Network Functions (VNFs) within NFV environments. In more detail, in NFV networks, a collection of service chains must be allocated on physical network nodes. A service chain is a set of one or more VNFs grouped together to provide specific service functionality, and it can be represented by an oriented graph, where each node corresponds to a particular VNF and each edge describes the operational flow exchanged between a pair of VNFs.

A service request can be allocated on dedicated hardware or by using resources deployed by the Service Provider (SP) that processes the request through virtualized instances. Because of this, two types of service deployments are possible in an NFV network: (i) on physical nodes and (ii) on virtualized instances.

In this paper, we focus on the second type of service deployment. We refer to this paradigm as pure NFV. As already outlined above, the SP processes the service request by means of VNFs. In particular, we developed an energy-aware solution to the problem of VNFs' allocation on physical network nodes. This solution is based on the concept of Game Theory (GT). GT is used to model interactions among self-interested players and to predict their choice of strategies to optimize cost or utility functions, until a Nash Equilibrium (NE) is reached, where no player can further increase its corresponding utility through individual action (see, e.g., [8] for specific applications in networking).

More specifically, we consider a bank of physical network nodes (in this paper we also use the terms node and resource to refer to the physical network node) performing tasks on requests submitted by the players' population. Hence, in this game, the role of the players is represented by the VNFs, which compete for the processing capacity pool, each by seeking the minimization of an individual cost function. The nodes can dynamically adjust their processing capacity according to the incoming workload (the processing power required by incoming VNF requests) by means of an AR strategy that aims at minimizing the product of energy consumption and processing delay. On the basis of the result of the nodes' AR strategy, the VNFs' resource sharing costs assume a polynomial form in the workloads, which admits a unique NE.

We examine the effect of different (unconstrained and constrained) forms of the nodes' optimization problem on the equilibrium, and we compare the power consumption and delay achieved with energy-aware and non-energy-aware strategy profiles.

The rest of the paper is organized as follows. Section 2 discusses the related work. Section 3 briefly summarizes the power model and AR optimization that was introduced in [9]. Although the scenario in [9] is different, it is reasonable to assume that similar considerations are valid here too. On the basis of this result, we derive a number of competitive strategies for the VNFs' allocation in Section 4. Section 5 presents various numerical evaluations, and Section 6 contains the conclusions.

2. Related Work

We now discuss the most relevant works that deal with the resource allocation of VNFs within NFV environments. We decided not only to describe solutions based on GT, but also to provide an overview of the current state of the art in this area. As introduced in the previous section, the key enabling paradigms that will considerably affect the dynamics of ICT networks are SDN and NFV, which are discussed in recent surveys [10–12]. Indeed, SPs and Network Operators (NOs) are facing increasing problems to design and implement novel network functionalities, following the rapid changes that characterize the current ISPs and telecom operators (TOs) [13].

Virtualization represents an efficient and cost-effective strategy to exploit and share physical network resources. In this context, the Network Embedding Problem (NEP) has been considered in several recent works [14–19]. In particular, the Virtual Network Embedding Problem (VNEP) consists of finding the mapping between a set of requests for virtual network resources and the available underlying physical infrastructure (the so-called substrate), ensuring that some given performance requirements (on nodes and links) are guaranteed. Typical node requirements are computational resources (i.e., CPU) or storage space, whereas links have a limited bandwidth and introduce a delay. It has been shown that this problem is NP-hard (it includes as a subproblem the multiway separator problem). For this reason, heuristic approaches have been devised [20].

The consolidation of virtual resources is considered in [18], by taking into account energy efficiency. The problem is formulated as a mixed integer linear programming (MILP) model, to understand the potential benefits that can be achieved by packing many different virtual tasks on the same physical infrastructure. The observed energy saving is up to 30% for nodes and up to 25% for link energy consumption. Reference [21] presents a solution for the resilient deployment of network functions, using OpenStack for the design and implementation of the proposed service orchestrator mechanism.

An allocation mechanism based on auction theory is proposed in [22]. In particular, the scheme selects the most remunerative virtual network requests, while satisfying QoS requirements and physical network constraints. The system is split into two network substrates modeling physical and virtual resources, with the final goal of finding the best mapping of virtual nodes and links onto physical ones, according to the QoS requirements (i.e., bandwidth, delay, and CPU bounds).

A novel network architecture is proposed in [23], to provide efficient coordinated control of both internal network function state and network forwarding state, in order to help operators achieve the following goals: (i) offering and satisfying tight service level agreements (SLAs); (ii) accurately monitoring and manipulating network traffic; and (iii) minimizing operating expenses.

Various engineering problems, where the action of one component has some impacts on the other components, have been modeled by GT. Indeed, GT, which has been applied at the beginning in economics and related domains, is gaining much interest today as a powerful tool to analyze and design communication networks [24]. Therefore, the problems can be formulated in the GT framework, and a stable solution for the components is obtained using the concept of equilibrium [25]. In this regard, GT has been used extensively to develop understanding of stable operating points for autonomous networks. The nodes are considered as the players. Payoff functions are often defined according to achieved connection bandwidth or similar technical metrics.

To construct algorithms with provable convergence to equilibrium points, many approaches consider network models that can be mapped to specially constructed games. Among this type of games, potential games use a real-valued function that represents the entire player set, to optimize some performance metric [8, 26]. We mention Altman et al. [27], who provide an extensive survey on networking games. The models and papers discussed in this reference mostly deal with noncooperative GT, the only exception being a short section focused on bargaining games. Finally, Seddiki et al. [25] presented an approach based on two-stage noncooperative games for bandwidth allocation, which aims at reducing the complexity of network management and avoiding bandwidth performance problems in a virtualized network environment. The first stage of the game is the bandwidth negotiation, where the SP requests bandwidth from multiple Infrastructure Providers (InPs). Each InP decides whether to accept or deny the request, when the SP would cause link congestion. The second stage is the bandwidth provisioning game, where different SPs compete for the bandwidth capacity of a physical link managed by a single InP.

3. Modeling Power Management Techniques

Dynamic Adaptation techniques in physical network nodes reduce the energy usage by exploiting the fact that systems do not need to run at peak performance all the time.

Rate adaptation is obtained by tuning the clock frequency and/or voltage of processors (DVFS, Dynamic Voltage and Frequency Scaling) or by throttling the CPU clock (i.e., the clock signal is disabled for some number of cycles at regular intervals). Decreasing the operating frequency and the voltage of the node, or throttling its clock, obviously allows the reduction of power consumption and of heat dissipation, at the price of slower performance.

Advantage can be taken of these adaptation capabilities to devise control techniques based on feedback on the incoming load. One of the first proposals is that examined in a seminal paper by Nedevschi et al. [28]. Among other possibilities, we consider here the simple optimization strategy developed in [9], aimed at minimizing the product of power consumption and processing delay with respect to the processing load. The application of this control technique gives rise to a quadratic dependence of the power-delay product on the load, which will be exploited in our subsequent game-theoretic resource allocation problem. Unlike the cited works, our solution considers as load the request rate of services that the SP must process. However, apart from this difference, the general considerations described are valid here too.

Specifically, the dynamic frequency-dependent part of the processor power consumption is proportional to the clock frequency $\nu$ and to the square of the supply voltage $V$ [29]. However, if DVFS is used, there is proportionality between the frequency and the voltage raised to some power $\gamma$, $0 < \gamma \le 1$ [30]. Moreover, the power scaling effect induced by alternating active-idle periods in the queueing system served by the physical network node can be accounted for by multiplying the dynamic power consumption by an increasing concave function of the resource utilization, of the form $(f/\mu)^{1/\alpha}$, where $\alpha \ge 1$ is a parameter and $f$ and $\mu$ are the workload (processing requests per unit time) and the task processing speed, respectively. By taking $\gamma = 1$ (which implies a cubic relationship in the operating frequency), the overall dependence of the power consumption $\Phi$ on the processing speed (which is proportional to the operating frequency) can be expressed as [9]
$$\Phi(\mu) = K \mu^{3} \left(\frac{f}{\mu}\right)^{1/\alpha}, \tag{1}$$
where $K > 0$ is a proportionality constant. This expression will be exploited in the optimization problems to be defined and studied in the next section.
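A one-line transcription of (1) follows; it can be used to visualize how the exponent $1/\alpha$ flattens the cubic frequency dependence at low utilization (the parameter values in the example are arbitrary assumptions).

def node_power(mu, f, K=1.0, alpha=2.0):
    """Dynamic power model (1): cubic in the processing speed mu,
    damped by the utilization factor (f/mu)^(1/alpha), alpha >= 1."""
    assert 0 < f <= mu
    return K * mu**3 * (f / mu) ** (1.0 / alpha)

print(node_power(mu=2.0, f=1.0))   # 8 * (0.5)^0.5 ~= 5.66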

4. System Model of Coordinated Network Management Optimization and Players' Game

We consider the situation shown in Figure 1. There are $S$ VNFs, represented by the incoming workload rates $\lambda_{1}, \ldots, \lambda_{S}$, competing for $N$ physical network nodes. The workload to the $j$th node is given by
$$f_{j} = \sum_{i=1}^{S} f^{i}_{j}, \tag{2}$$
where $f^{i}_{j}$ is VNF $i$'s contribution. The total workload vector of player $i$ is $\mathbf{f}^{i} = \mathrm{col}[f^{i}_{1}, \ldots, f^{i}_{N}]$ and, obviously,
$$\sum_{j=1}^{N} f^{i}_{j} = \lambda_{i}. \tag{3}$$

[Figure 1: System model for VNFs resource allocation. $S$ VNFs with rates $\lambda_{1}, \ldots, \lambda_{S}$ submit their workloads through a resource allocator to $N$ physical nodes, with $f_{j} = \sum_{i=1}^{S} f^{i}_{j}$.]

The computational resources have processing rates $\mu_{j}$, $j = 1, \ldots, N$, and we associate a cost function with each one, namely,
$$c_{j}(\mu_{j}) = \frac{K_{j}\, (f_{j})^{1/\alpha_{j}}\, (\mu_{j})^{3 - 1/\alpha_{j}}}{\mu_{j} - f_{j}}. \tag{4}$$
Cost functions of the form shown in (4) have been introduced in [9] and represent the product of power and processing delay, under the M/M/1 assumption on each node's queue. They actually correspond to the average energy consumption that can be attributed to the functions' handling (considering that the node will be active whenever there are queued functions). Despite the possible inaccuracy in the representation of the queueing behavior, the denominator of (4) reflects the penalty paid for approaching the resource capacity, which is a major aspect in a model to be used for control purposes.
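Since (4) is the product of the power model (1) and the M/M/1 delay $1/(\mu_{j} - f_{j})$, its direct transcription is immediate; the default parameters below are again arbitrary examples.

def node_cost(mu, f, K=1.0, alpha=2.0):
    """Cost (4): power (1) times the M/M/1 delay 1/(mu - f).
    Equals K * f^(1/alpha) * mu^(3 - 1/alpha) / (mu - f)."""
    assert mu > f > 0
    power = K * mu**3 * (f / mu) ** (1.0 / alpha)
    return power / (mu - f)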

We now consider two specific control problems, whose resulting resource allocations are interconnected. Specifically, we assume the presence of Control Policies (CPs) acting in the network nodes, whose aim is to dynamically adjust the processing rate in order to minimize cost functions of the form of (4). On the other hand, the players representing the VNFs, knowing the policy adopted by the CPs and the resulting form of their optimal costs, implement their own strategies to distribute their requests among the different resources.

The simple control structure that we envisage actually represents a situation that may be of interest in a number of different operational settings. We only mention two different cases of current high relevance: (i) a multitenant environment, where a number of ISPs are offered services by a datacenter operator (or by a telecom operator in the access network) on a number of virtual machines, and (ii) a collection of virtual environments created by a SP on behalf of its customers, where services are activated to represent customers' functionalities on their own private virtual LANs. It is worth noting that, in both cases, the nodes' customers may be unwilling to disclose their loads to one another, which justifies the decentralized game optimization.

4.1. Nodes' Control Policies. We consider two possible variants.

4.1.1. CP1 (Unconstrained). The direct minimization of (4) with respect to $\mu_{j}$ yields immediately
$$\mu^{*}_{j}(f_{j}) = \frac{3\alpha_{j} - 1}{2\alpha_{j} - 1}\, f_{j} = \vartheta_{j} f_{j}, \qquad c^{*}_{j}(f_{j}) = K_{j}\, \frac{(\vartheta_{j})^{3 - 1/\alpha_{j}}}{\vartheta_{j} - 1}\, (f_{j})^{2} = h_{j} \cdot (f_{j})^{2}. \tag{5}$$

We must note that we are considering a continuous solution to the node capacity adjustment. In practice, the physical resources allow a discrete set of working frequencies, with corresponding processing capacities. This would also ensure that the processing speed does not decrease below a lower threshold, avoiding excessive delay in the case of low load. The unconstrained problem is an idealized variant that would make sense only when the load on the node is not too small.
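As a quick numerical sanity check of (5), one can minimize (4) on a fine grid and compare the minimizer with $\vartheta_{j} f_{j}$; the parameter values here are arbitrary.

import numpy as np

K, a, f = 2.0, 2.5, 0.6
theta = (3 * a - 1) / (2 * a - 1)                        # theta_j from (5)
mu = np.linspace(1.01 * f, 5 * f, 200001)
cost = K * f ** (1 / a) * mu ** (3 - 1 / a) / (mu - f)   # cost (4)
print(mu[np.argmin(cost)], theta * f)                    # both ~= 0.975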

4.1.2. CP2 (Individual Constraints). Each node has a maximum and a minimum processing capacity, which are taken explicitly into account ($\mu^{\min}_{j} \le \mu_{j} \le \mu^{\max}_{j}$, $j = 1, \ldots, N$). Then
$$\mu^{*}_{j}(f_{j}) = \begin{cases} \vartheta_{j} f_{j} & \underline{f}_{j} = \dfrac{\mu^{\min}_{j}}{\vartheta_{j}} \le f_{j} \le \dfrac{\mu^{\max}_{j}}{\vartheta_{j}} = \overline{f}_{j} \\ \mu^{\min}_{j} & 0 < f_{j} < \underline{f}_{j} \\ \mu^{\max}_{j} & \mu^{\max}_{j} > f_{j} > \overline{f}_{j}. \end{cases} \tag{6}$$

Therefore,
$$c^{*}_{j}(f_{j}) = \begin{cases} h_{j} \cdot (f_{j})^{2} & \underline{f}_{j} \le f_{j} \le \overline{f}_{j} \\ \dfrac{K_{j}\, (f_{j})^{1/\alpha_{j}}\, (\mu^{\max}_{j})^{3 - 1/\alpha_{j}}}{\mu^{\max}_{j} - f_{j}} & \mu^{\max}_{j} > f_{j} > \overline{f}_{j} \\ \dfrac{K_{j}\, (f_{j})^{1/\alpha_{j}}\, (\mu^{\min}_{j})^{3 - 1/\alpha_{j}}}{\mu^{\min}_{j} - f_{j}} & 0 < f_{j} < \underline{f}_{j}. \end{cases} \tag{7}$$
In this case, we assume explicitly that $\sum_{i=1}^{S} \lambda_{i} < \sum_{j=1}^{N} \mu^{\max}_{j}$.

4.2. Noncooperative Players' Optimization. Given the optimal CPs, which can be found in functional form, we can then state the following.

4.2.1. Players' Problem. Each VNF $i$ wants to minimize, with respect to its workload vector $\mathbf{f}^{i} = \mathrm{col}[f^{i}_{1}, \ldots, f^{i}_{N}]$, a weighted (over its workload distribution) sum of the resources' costs, given the flow distribution of the others:
$$\mathbf{f}^{i*} = \arg\min_{\{f^{i}_{1}, \ldots, f^{i}_{N}:\ \sum_{j=1}^{N} f^{i}_{j} = \lambda_{i}\}} J^{i} = \arg\min_{\{f^{i}_{1}, \ldots, f^{i}_{N}:\ \sum_{j=1}^{N} f^{i}_{j} = \lambda_{i}\}} \frac{1}{\lambda_{i}} \sum_{j=1}^{N} f^{i}_{j}\, c^{*}_{j}(f_{j}), \quad i = 1, \ldots, S. \tag{8}$$

In this formulation, the players' problem is a noncooperative game, of which we can seek a NE [31].

We examine the case of the application of CP1 first. Then the cost of VNF $i$ is given by
$$J^{i} = \frac{1}{\lambda_{i}} \sum_{j=1}^{N} f^{i}_{j}\, h_{j} \cdot (f_{j})^{2} = \frac{1}{\lambda_{i}} \sum_{j=1}^{N} f^{i}_{j}\, h_{j} \cdot (f^{i}_{j} + f^{-i}_{j})^{2}, \tag{9}$$
where $f^{-i}_{j} = \sum_{k \ne i} f^{k}_{j}$ is the total flow from the other VNFs to node $j$.

The cost in (9) is convex in $f^{i}_{j}$ and belongs to a category of cost functions previously investigated in the networking literature without specific reference to energy efficiency [32, 33]. In particular, it is in the family of functions considered in [33], for which the NE Point (NEP) existence and uniqueness conditions of Rosen [34] have been shown to hold. Therefore, our players' problem admits a unique NEP. The latter can be found by considering the Karush-Kuhn-Tucker stationarity conditions with Lagrange multiplier $\xi_{i}$:
$$\xi_{i} = \frac{\partial J^{i}}{\partial f^{i}_{k}},\ f^{i}_{k} > 0; \qquad \xi_{i} \le \frac{\partial J^{i}}{\partial f^{i}_{k}},\ f^{i}_{k} = 0; \qquad k = 1, \ldots, N, \tag{10}$$
from which, for $f^{i}_{k} > 0$,
$$\lambda_{i}\xi_{i} = h_{k}(f^{i}_{k} + f^{-i}_{k})^{2} + 2 h_{k} f^{i}_{k} (f^{i}_{k} + f^{-i}_{k}), \qquad f^{i}_{k} = -\frac{2}{3} f^{-i}_{k} \pm \frac{1}{3h_{k}} \sqrt{(h_{k} f^{-i}_{k})^{2} + 3 h_{k} \lambda_{i} \xi_{i}}. \tag{11}$$
Excluding the negative solution, the nonzero components are those with the smallest and equal partial derivatives; that is, in a subset $\mathcal{M} \subseteq \{1, \ldots, N\}$, they yield $\sum_{j \in \mathcal{M}} f^{i}_{j} = \lambda_{i}$, that is,
$$\sum_{j \in \mathcal{M}} \left[ -\frac{2}{3} f^{-i}_{j} + \frac{1}{3h_{j}} \sqrt{(h_{j} f^{-i}_{j})^{2} + 3 h_{j} \lambda_{i} \xi_{i}} \right] = \lambda_{i}, \tag{12}$$
from which $\xi_{i}$ can be determined.
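Since, by (11), each nonzero component is an increasing function of $\xi_{i}$, the best response of a player can be computed by bisection on $\xi_{i}$, and the NEP can be approximated by iterating best responses. The following sketch illustrates this, with our own choice of starting point and stopping rule, not a procedure taken from the paper.

import numpy as np

def best_response(lam_i, f_minus, h, tol=1e-10):
    """Best response of one VNF: bisection on xi in (11)-(12), with the
    negative root excluded and components clipped at zero."""
    def split(xi):
        disc = (h * f_minus) ** 2 + 3.0 * h * lam_i * xi
        return np.maximum(0.0, -2.0 / 3.0 * f_minus
                          + np.sqrt(disc) / (3.0 * h))
    lo, hi = 0.0, 1.0
    while split(hi).sum() < lam_i:       # enlarge the bracket
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if split(mid).sum() < lam_i else (lo, mid)
    return split(hi)

def nash_equilibrium(lam, h, iters=200):
    """Gauss-Seidel best-response dynamics for the quadratic game (9)."""
    S, N = len(lam), len(h)
    f = np.tile((lam / N)[:, None], (1, N))      # uniform starting split
    for _ in range(iters):
        for i in range(S):
            f_minus = f.sum(axis=0) - f[i]
            f[i] = best_response(lam[i], f_minus, h)
    return f

# Example: h_j derived from (5) with assumed K_j and alpha_j
alpha = np.array([2.0, 2.5, 3.0])
K = np.array([1.0, 5.0, 10.0])
theta = (3 * alpha - 1) / (2 * alpha - 1)
h = K * theta ** (3 - 1 / alpha) / (theta - 1)
lam = np.array([0.3, 0.5, 0.7])
f_star = nash_equilibrium(lam, h)
print(f_star.sum(axis=1))                        # ~= lam at the NEP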

As regards form (7) of the optimal node cost, we can note that, if we are restricted to $\underline{f}_{j} \le f_{j} \le \overline{f}_{j}$, $j = 1, \ldots, N$ (a coupled convex constraint set), the conditions of Rosen still hold. If we allow the larger constraint set $0 < f_{j} < \mu^{\max}_{j}$, $j = 1, \ldots, N$, the nodes' optimal cost functions are no longer of polynomial type. However, the composite function is still continuously differentiable (see the Appendix). Then it would (i) satisfy the conditions for a convex game (the overall function composed of the three parts is convex), which guarantee the existence of a NEP [34, Th. 1], and (ii) possess equivalent properties to functions of Type A as defined in [32], which lead to uniqueness of the NEP.

5. Numerical Results

In order to evaluate our criterion, we present numerical results deriving from the application of the simplest Control Policy (CP1), which implies the solution of a completely quadratic problem (corresponding to costs as in (9) for the NEP). We compare the resulting allocations, power consumption, and average delays with those stemming from the application (on non-energy-aware nodes) of the algorithm for the minimization of the pure delay function that was developed in [35, Proposition 1]. This algorithm is considered a valid reference point in order to provide a good validation of our criterion. The corresponding cost of VNF $i$ is
$$J^{i}_{T} = \sum_{j=1}^{N} f^{i}_{j}\, T_{j}(f_{j}), \quad i = 1, \ldots, S, \tag{13}$$
where
$$T_{j}(f_{j}) = \begin{cases} \dfrac{1}{\mu^{\max}_{j} - f_{j}} & f_{j} < \mu^{\max}_{j} \\ \infty & f_{j} \ge \mu^{\max}_{j}. \end{cases} \tag{14}$$
The total demand is assumed to be less than the total operating capacity, as we have done with our CP2 in (7).

To make the comparison fair, we have to make sure that the maximum operating capacities $\mu_j^{\max}$, $j = 1, \ldots, N$ (that are constant in (14)), are compatible with the quadratic behavior stemming from the application of CP1. To this aim, let $^{(\mathrm{LCP1})}\!f_j$ denote the total input workload to node $j$ corresponding to the NEP obtained by our problem; we then choose
$$\mu_j^{\max} = \theta_j\, {}^{(\mathrm{LCP1})}\!f_j, \quad j = 1, \ldots, N. \tag{15}$$
We indicate with $^{(T)}\!f_j$, $^{(T)}\!f_j^i$, $j = 1, \ldots, N$, $i = 1, \ldots, S$, the total node input workloads and those corresponding to VNF $i$, respectively, produced by the NEP deriving from the minimization of costs in (13). The average delays for the flow of player $i$ under the two strategy profiles are obtained as

$$D_T^i = \frac{1}{\lambda^i} \sum_{j=1}^{N} \frac{^{(T)}\!f_j^i}{\mu_j^{\max} - {}^{(T)}\!f_j} = \frac{1}{\lambda^i} \sum_{j=1}^{N} \frac{^{(T)}\!f_j^i}{\theta_j\, {}^{(\mathrm{LCP1})}\!f_j - {}^{(T)}\!f_j},$$
$$D_{\mathrm{LCP1}}^i = \frac{1}{\lambda^i} \sum_{j=1}^{N} \frac{^{(\mathrm{LCP1})}\!f_j^i}{\theta_j\, {}^{(\mathrm{LCP1})}\!f_j - {}^{(\mathrm{LCP1})}\!f_j} = \frac{1}{\lambda^i} \sum_{j=1}^{N} \frac{^{(\mathrm{LCP1})}\!f_j^i}{\left(\theta_j - 1\right) {}^{(\mathrm{LCP1})}\!f_j}, \tag{16}$$

and the power consumption values are given by

$$^{(T)}\!P_j = K_j \left(\mu_j^{\max}\right)^3 = K_j \left(\theta_j\, {}^{(\mathrm{LCP1})}\!f_j\right)^3,$$
$$^{(\mathrm{LCP1})}\!P_j = K_j \left(\theta_j\, {}^{(\mathrm{LCP1})}\!f_j\right)^3 \left( \frac{^{(\mathrm{LCP1})}\!f_j}{\theta_j\, {}^{(\mathrm{LCP1})}\!f_j} \right)^{1/\alpha_j} = K_j \left(\theta_j\, {}^{(\mathrm{LCP1})}\!f_j\right)^3 \theta_j^{-1/\alpha_j}. \tag{17}$$

Our data for the comparison are organized as follows. We consider normalized input rates, so that $\lambda^i \le 1$, $i = 1, \ldots, S$. We generate randomly $S$ values corresponding to the VNFs' input rates from independent uniform distributions; then

(1) we find the NEP of our quadratic game by choosing the parameters $K_j$, $\alpha_j$, $j = 1, \ldots, N$, also from independent uniform distributions (with $1 \le K_j \le 10$, $2 \le \alpha_j \le 3$, resp.);

(2) by using the workload profiles obtained, we compute the values $\mu_j^{\max}$, $j = 1, \ldots, N$, from (15) and find the NEP corresponding to costs in (13) and (14) by using the same input rates;

(3) we compute the corresponding average delays and power consumption values for the two cases by using (16) and (17), respectively.

Steps (1)-(3) are repeated with a new configuration of random values of the input rates for a certain number of times; then, for both games, we average the values of the delays and power consumption per VNF, and of the total delay and power consumption averaged over the VNFs, over all trials. We compare the results obtained in this way for four different settings of VNFs and nodes, namely, $S = N = 3$; $S = 3$, $N = 5$; $S = 5$, $N = 3$; and $S = 5$, $N = 5$. The rationale of repeating the experiments with random configurations is to explore a number of different possible settings and collect the results in a synthetic form; a sketch of this experimental loop is given below.
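The following Python fragment outlines the sampling of step (1) and the closed-form power evaluation of (17). It is a minimal sketch under our own naming: the NEP computation itself is omitted, and the choice $\theta_j = (3\alpha_j - 1)/(2\alpha_j - 1)$ (the value for which the Appendix shows continuous differentiability) is our assumption for generating consistent $\mu_j^{\max}$ values.

```python
import random

def sample_instance(S, N):
    """Random inputs of one trial: normalized rates and node parameters."""
    lam = [random.uniform(0.0, 1.0) for _ in range(S)]    # lambda^i <= 1
    K = [random.uniform(1.0, 10.0) for _ in range(N)]     # 1 <= K_j <= 10
    alpha = [random.uniform(2.0, 3.0) for _ in range(N)]  # 2 <= alpha_j <= 3
    theta = [(3.0 * a - 1.0) / (2.0 * a - 1.0) for a in alpha]  # see the Appendix
    return lam, K, alpha, theta

def powers(K, alpha, theta, f_lcp1):
    """Eq. (17): per-node power in the NEE case and in the EE (CP1) case,
    given the CP1 equilibrium workloads f_lcp1[j] = (LCP1)f_j."""
    p_nee = [K[j] * (theta[j] * f_lcp1[j]) ** 3 for j in range(len(K))]
    p_ee = [p * t ** (-1.0 / a) for p, t, a in zip(p_nee, theta, alpha)]
    return p_nee, p_ee
```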

For each VNFs-nodes setting, 30 random input configurations are generated to produce the final average values.

Figure 2: Power consumption with $S = N = 3$.

Figure 3: User (VNF) delays with $S = N = 3$.

In Figures 2-10, the labels EE and NEE refer to the energy-efficient case (quadratic cost) and to the non-energy-efficient one (see (13) and (14)), respectively. The label "Saving (%)" refers to the saving (in percentage) of the EE case compared to the NEE one; when it is negative, the EE case presents higher values than the NEE one. Finally, the label "User" refers to the VNF.

The results are reported in Figures 2-10. The model implementation and the relative results have been developed with MATLAB® [36].

In general, the energy-efficient optimization tends to save energy at the expense of a relatively small increase in delay. Indeed, as shown in the figures, the saving is positive for the energy consumption and negative for the delay. The energy saving is always higher than 18% in every case, while the delay increase is always lower than 4%.

However, it can be noted (by comparing, e.g., Figures 5 and 7) that configurations with lower load are penalized in the delay. This effect is caused by the behavior of the simple linear adaptation strategy, which maintains constant utilization, and highlights the relevance of setting not only maximum but also minimum values for the processing capacity (or for the node input workloads).


Figure 4: Power consumption with $S = 3$, $N = 5$.

Figure 5: User (VNF) delays with $S = 3$, $N = 5$.

Figure 6: Power consumption with $S = 5$, $N = 3$.

Figure 7: User (VNF) delays with $S = 5$, $N = 3$.

Figure 8: Power consumption with $S = 5$, $N = 5$.

Figure 9: User (VNF) delays with $S = 5$, $N = 5$.


Figure 10: Power consumption and delay product of the various cases.


Another consideration arising from the results concerns the variation of the energy consumption when changing the number of players and/or nodes. For example, Figure 6 (case $S = 5$, $N = 3$) shows a total power consumption significantly higher than the one shown in Figure 8 (case $S = 5$, $N = 5$). The reason for this difference is the lower workload managed by the nodes in the case $S = 5$, $N = 5$ (greater number of nodes with an equal number of users), which makes it possible to exploit the AR mechanisms with significant energy saving.

Finally, Figure 10 shows the product of the power consumption and the delay for each studied case. As described in the previous sections, our criterion aims at minimizing this product. As can be easily seen, the saving resulting from the application of our policy is always above 10%.

6. Conclusions

We have examined the possible effect of introducing simple energy optimization strategies in physical network nodes performing services for a set of VNFs that request their computational power within an NFV environment. The VNFs' strategy profiles for resource sharing have been obtained from the solution of a noncooperative game, where the players are aware of the adaptive behavior of the resources and aim at minimizing costs that reflect this behavior. We have compared the energy consumption and processing delay with those stemming from a game played with non-energy-aware nodes and VNFs. The results show that the effect on delay is almost negligible, whereas significant power saving can be obtained. In particular, the energy saving is always higher than 18% in every case, with a delay increase always lower than 4%.

From another point of view, it would also be of interest to investigate socially optimal solutions stemming from a Nash Bargaining Problem (NBP), along the lines of [37], by defining suitable utility functions for the players. Another interesting evolution of this work could be the application of additional constraints to the proposed solution, such as, for example, the maximum delay that a flow can support.

Appendix

We check the maintenance of continuous differentiability at the point $f_j = \mu_j^{\max}/\theta_j$. By noting that the derivative of the cost function of player $i$ with respect to $f_j^i$ has the form
$$\frac{\partial J^i}{\partial f_j^i} = c_j^*(f_j) + f_j^i\, c_j^{*\prime}(f_j), \tag{A.1}$$
it is immediate to note that we need to compare
$$\frac{\partial}{\partial f_j^i}\left[\frac{K_j \left(f_j^i + f_j^{-i}\right)^{1/\alpha_j} \left(\mu_j^{\max}\right)^{3-1/\alpha_j}}{\mu_j^{\max} - f_j^i - f_j^{-i}}\right]_{f_j^i = \mu_j^{\max}/\theta_j - f_j^{-i}} \quad\text{with}\quad \frac{\partial}{\partial f_j^i}\left[h_j \left(f_j\right)^2\right]_{f_j^i = \mu_j^{\max}/\theta_j - f_j^{-i}}. \tag{A.2}$$
We have
$$K_j \frac{\left(1/\alpha_j\right)\left(f_j\right)^{1/\alpha_j - 1}\left(\mu_j^{\max}\right)^{3-1/\alpha_j}\left(\mu_j^{\max} - f_j\right) + \left(f_j\right)^{1/\alpha_j}\left(\mu_j^{\max}\right)^{3-1/\alpha_j}}{\left(\mu_j^{\max} - f_j\right)^2}\Bigg|_{f_j = \mu_j^{\max}/\theta_j}$$
$$= K_j \frac{\left(\mu_j^{\max}\right)^{3-1/\alpha_j}\left(\mu_j^{\max}/\theta_j\right)^{1/\alpha_j}\left[\left(1/\alpha_j\right)\left(\mu_j^{\max}/\theta_j\right)^{-1}\left(\mu_j^{\max} - \mu_j^{\max}/\theta_j\right) + 1\right]}{\left(\mu_j^{\max} - \mu_j^{\max}/\theta_j\right)^2}$$
$$= K_j \frac{\left(\mu_j^{\max}\right)^{3}\left(\theta_j\right)^{-1/\alpha_j}\left[\left(1/\alpha_j\right)\theta_j\left(1 - 1/\theta_j\right) + 1\right]}{\left(\mu_j^{\max}\right)^{2}\left(1 - 1/\theta_j\right)^2}$$
$$= K_j\, \mu_j^{\max}\, \frac{\left(\theta_j\right)^{-1/\alpha_j}\left(\theta_j/\alpha_j - 1/\alpha_j + 1\right)}{\left(\left(\theta_j - 1\right)/\theta_j\right)^2},$$
$$2 h_j f_j\Big|_{f_j = \mu_j^{\max}/\theta_j} = 2 K_j\, \frac{\left(\theta_j\right)^{3-1/\alpha_j}}{\theta_j - 1} \cdot \frac{\mu_j^{\max}}{\theta_j}. \tag{A.3}$$
Then, let us check when
$$K_j\, \mu_j^{\max}\, \frac{\left(\theta_j\right)^{-1/\alpha_j}\left(\theta_j/\alpha_j - 1/\alpha_j + 1\right)}{\left(\left(\theta_j - 1\right)/\theta_j\right)^2} = 2 K_j\, \frac{\left(\theta_j\right)^{3-1/\alpha_j}}{\theta_j - 1} \cdot \frac{\mu_j^{\max}}{\theta_j},$$
that is,
$$\frac{\left(\theta_j/\alpha_j - 1/\alpha_j + 1\right)\left(\theta_j\right)^2}{\theta_j - 1} = 2\left(\theta_j\right)^2, \qquad \frac{\theta_j}{\alpha_j} - \frac{1}{\alpha_j} + 1 = 2\theta_j - 2.$$
Substituting $\theta_j = (3\alpha_j - 1)/(2\alpha_j - 1)$, the last equality becomes
$$\left(\frac{1}{\alpha_j} - 2\right)\frac{3\alpha_j - 1}{2\alpha_j - 1} - \frac{1}{\alpha_j} + 3 = 0,$$
$$\left(\frac{1 - 2\alpha_j}{\alpha_j}\right)\frac{3\alpha_j - 1}{2\alpha_j - 1} - \frac{1}{\alpha_j} + 3 = 0,$$
$$\frac{1 - 3\alpha_j}{\alpha_j} - \frac{1}{\alpha_j} + 3 = 0,$$
$$1 - 3\alpha_j - 1 + 3\alpha_j = 0, \tag{A.4}$$
which is identically satisfied, so continuous differentiability is maintained.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was partially supported by the H2020 European Projects ARCADIA (http://www.arcadia-framework.eu) and INPUT (http://www.input-project.eu), funded by the European Commission.

References

[1] C. Bianco, F. Cucchietti, and G. Griffa, "Energy consumption trends in the next generation access network - a telco perspective," in Proceedings of the 29th International Telecommunication Energy Conference (INTELEC '07), pp. 737-742, Rome, Italy, September-October 2007.

[2] GeSI, SMARTer 2020: The Role of ICT in Driving a Sustainable Future, December 2012, http://gesi.org/portfolio/report/72.

[3] A. Manzalini, "Future edge ICT networks," IEEE COMSOC MMTC E-Letter, vol. 7, no. 7, pp. 1-4, 2012.

[4] B. A. A. Nunes, M. Mendonca, X.-N. Nguyen, K. Obraczka, and T. Turletti, "A survey of software-defined networking: past, present, and future of programmable networks," IEEE Communications Surveys & Tutorials, vol. 16, no. 3, pp. 1617-1634, 2014.

[5] "Network functions virtualisation - introductory white paper," in Proceedings of the SDN and OpenFlow World Congress, Darmstadt, Germany, October 2012.

[6] R. Bolla, R. Bruschi, F. Davoli, and C. Lombardo, "Fine-grained energy-efficient consolidation in SDN networks and devices," IEEE Transactions on Network and Service Management, vol. 12, no. 2, pp. 132-145, 2015.

[7] R. Bolla, R. Bruschi, F. Davoli, and F. Cucchietti, "Energy efficiency in the future internet: a survey of existing approaches and trends in energy-aware fixed network infrastructures," IEEE Communications Surveys & Tutorials, vol. 13, no. 2, pp. 223-244, 2011.

[8] I. Menache and A. Ozdaglar, Network Games: Theory, Models and Dynamics, Morgan & Claypool Publishers, San Rafael, Calif, USA, 2011.

[9] R. Bolla, R. Bruschi, F. Davoli, and P. Lago, "Optimizing the power-delay product in energy-aware packet forwarding engines," in Proceedings of the 24th Tyrrhenian International Workshop on Digital Communications - Green ICT (TIWDC '13), pp. 1-5, IEEE, Genoa, Italy, September 2013.

[10] A. Khan, A. Zugenmaier, D. Jurca, and W. Kellerer, "Network virtualization: a hypervisor for the internet," IEEE Communications Magazine, vol. 50, no. 1, pp. 136-143, 2012.

[11] Q. Duan, Y. Yan, and A. V. Vasilakos, "A survey on service-oriented network virtualization toward convergence of networking and cloud computing," IEEE Transactions on Network and Service Management, vol. 9, no. 4, pp. 373-392, 2012.

[12] G. M. Roy, S. K. Saurabh, N. M. Upadhyay, and P. K. Gupta, "Creation of virtual node, virtual link and managing them in network virtualization," in Proceedings of the World Congress on Information and Communication Technologies (WICT '11), pp. 738-742, IEEE, Mumbai, India, December 2011.

[13] A. Manzalini, R. Saracco, C. Buyukkoc et al., "Software-defined networks or future networks and services - main technical challenges and business implications," in Proceedings of the IEEE Workshop SDN4FNS, Trento, Italy, November 2013.

[14] M. Yu, Y. Yi, J. Rexford, and M. Chiang, "Rethinking virtual network embedding: substrate support for path splitting and migration," ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, pp. 17-19, 2008.

[15] A. Jarray and A. Karmouch, "Decomposition approaches for virtual network embedding with one-shot node and link mapping," IEEE/ACM Transactions on Networking, vol. 23, no. 3, pp. 1012-1025, 2015.

[16] N. M. M. K. Chowdhury, M. R. Rahman, and R. Boutaba, "Virtual network embedding with coordinated node and link mapping," in Proceedings of the 28th Conference on Computer Communications (IEEE INFOCOM '09), pp. 783-791, Rio de Janeiro, Brazil, April 2009.

[17] X. Cheng, S. Su, Z. Zhang et al., "Virtual network embedding through topology-aware node ranking," ACM SIGCOMM Computer Communication Review, vol. 41, no. 2, pp. 38-47, 2011.

[18] J. F. Botero, X. Hesselbach, M. Duelli, D. Schlosser, A. Fischer, and H. De Meer, "Energy efficient virtual network embedding," IEEE Communications Letters, vol. 16, no. 5, pp. 756-759, 2012.

[19] A. Leivadeas, C. Papagianni, and S. Papavassiliou, "Efficient resource mapping framework over networked clouds via iterated local search-based request partitioning," IEEE Transactions on Parallel and Distributed Systems, vol. 24, no. 6, pp. 1077-1086, 2013.

[20] A. Fischer, J. F. Botero, M. T. Beck, H. de Meer, and X. Hesselbach, "Virtual network embedding: a survey," IEEE Communications Surveys & Tutorials, vol. 15, no. 4, pp. 1888-1906, 2013.

[21] M. Scholler, M. Stiemerling, A. Ripke, and R. Bless, "Resilient deployment of virtual network functions," in Proceedings of the 5th International Congress on Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT '13), pp. 208-214, Almaty, Kazakhstan, September 2013.

[22] A. Jarray and A. Karmouch, "VCG auction-based approach for efficient Virtual Network embedding," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 609-615, Ghent, Belgium, May 2013.

[23] A. Gember-Jacobson, R. Viswanathan, C. Prakash et al., "OpenNF: enabling innovation in network function control," in Proceedings of the ACM Conference on Special Interest Group on Data Communication (SIGCOMM '14), pp. 163-174, ACM, Chicago, Ill, USA, August 2014.

[24] J. Antoniou and A. Pitsillides, Game Theory in Communication Networks: Cooperative Resolution of Interactive Networking Scenarios, CRC Press, Boca Raton, Fla, USA, 2013.

[25] M. S. Seddiki, M. Frikha, and Y.-Q. Song, "A non-cooperative game-theoretic framework for resource allocation in network virtualization," Telecommunication Systems, vol. 61, no. 2, pp. 209-219, 2016.

[26] L. Pavel, Game Theory for Control of Optical Networks, Springer, New York, NY, USA, 2012.

[27] E. Altman, T. Boulogne, R. El-Azouzi, T. Jimenez, and L. Wynter, "A survey on networking games in telecommunications," Computers & Operations Research, vol. 33, no. 2, pp. 286-311, 2006.

[28] S. Nedevschi, L. Popa, G. Iannaccone, S. Ratnasamy, and D. Wetherall, "Reducing network energy consumption via sleeping and rate-adaptation," in Proceedings of the 5th USENIX Symposium on Networked Systems Design and Implementation (NSDI '08), pp. 323-336, San Francisco, Calif, USA, April 2008.

[29] S. Henzler, Power Management of Digital Circuits in Deep Sub-Micron CMOS Technologies, vol. 25 of Springer Series in Advanced Microelectronics, Springer, Dordrecht, The Netherlands, 2007.

[30] K. Li, "Scheduling parallel tasks on multiprocessor computers with efficient power management," in Proceedings of the IEEE International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum (IPDPSW '10), pp. 1-8, IEEE, Atlanta, Ga, USA, April 2010.

[31] T. Basar, Lecture Notes on Non-Cooperative Game Theory, Hamilton Institute, Kildare, Ireland, 2010, http://www.hamilton.ie/ollie/Downloads/Game.pdf.

[32] A. Orda, R. Rom, and N. Shimkin, "Competitive routing in multiuser communication networks," IEEE/ACM Transactions on Networking, vol. 1, no. 5, pp. 510-521, 1993.

[33] E. Altman, T. Basar, T. Jimenez, and N. Shimkin, "Competitive routing in networks with polynomial costs," IEEE Transactions on Automatic Control, vol. 47, no. 1, pp. 92-96, 2002.

[34] J. B. Rosen, "Existence and uniqueness of equilibrium points for concave N-person games," Econometrica, vol. 33, no. 3, pp. 520-534, 1965.

[35] Y. A. Korilis, A. A. Lazar, and A. Orda, "Architecting noncooperative networks," IEEE Journal on Selected Areas in Communications, vol. 13, no. 7, pp. 1241-1251, 1995.

[36] MATLAB®: The Language of Technical Computing, http://www.mathworks.com/products/matlab/.

[37] H. Yaïche, R. R. Mazumdar, and C. Rosenberg, "A game theoretic framework for bandwidth allocation and pricing in broadband networks," IEEE/ACM Transactions on Networking, vol. 8, no. 5, pp. 667-678, 2000.

Research Article
A Processor-Sharing Scheduling Strategy for NFV Nodes

Giuseppe Faraci, Alfio Lombardo, and Giovanni Schembra

Dipartimento di Ingegneria Elettrica, Elettronica e Informatica (DIEEI), University of Catania, 95123 Catania, Italy

Correspondence should be addressed to Giovanni Schembra; schembra@dieei.unict.it

Received 2 November 2015; Accepted 12 January 2016

Academic Editor: Xavier Hesselbach

Copyright © 2016 Giuseppe Faraci et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The introduction of the two paradigms SDN and NFV to "softwarize" the current Internet is making management and resource allocation two key challenges in the evolution towards the Future Internet. In this context, this paper proposes Network-Aware Round Robin (NARR), a processor-sharing strategy to reduce delays in traversing SDN/NFV nodes. The application of NARR alleviates the job of the Orchestrator by automatically working at the intranode level, dynamically assigning the processor slices to the virtual network functions (VNFs) according to the state of the queues associated with the output links of the network interface cards (NICs). An extensive simulation set is presented to show the improvements achieved with respect to two more processor-sharing strategies chosen as reference.

1. Introduction

In the last few years, the diffusion of new, complex, and efficient distributed services in the Internet is becoming increasingly difficult because of the ossification of the Internet protocols, the proprietary nature of existing hardware appliances, the costs, and the lack of skilled professionals for maintaining and upgrading them.

In order to alleviate these problems, two new network paradigms, Software Defined Networks (SDN) [1-5] and Network Functions Virtualization (NFV) [6, 7], have been recently proposed with the specific target of improving the flexibility of network service provisioning and reducing the time to market of new services.

SDN is an emerging architecture that aims at making the network dynamic, manageable, and cost-effective by decoupling the system that makes decisions about where traffic is sent (the control plane) from the underlying system that forwards traffic to the selected destination (the data plane). In this way, the network control becomes directly programmable, and the underlying infrastructure is abstracted for applications and network services.

NFV is a core structural change in the way telecommunication infrastructure is deployed. The NFV initiative started in late 2012 by some of the biggest telecommunications service providers, which formed an Industry Specification Group (ISG) within the European Telecommunications Standards Institute (ETSI). The interest has grown, involving today more than 28 network operators and over 150 technology providers from across the telecommunications industry [7]. The NFV paradigm leverages on virtualization technologies and commercial off-the-shelf programmable hardware, such as general-purpose servers, storage, and switches, with the final target of decoupling the software implementation of network functions from the underlying hardware.

The coexistence and the interaction of both NFV and SDN paradigms is giving to the network operators the possibility of achieving greater agility and acceleration in new service deployments, with a consequent considerable reduction of both Capital Expenditure (CAPEX) and Operational Expenditure (OPEX) [8].

One of the main challenging problems in deploying an SDN/NFV network is an efficient design of the resource allocation and management functions that are in charge of the network Orchestrator. Although this task is well covered in data center and cloud scenarios [9, 10], it is currently a challenging problem in a geographic network, where transmission delays cannot be neglected and transmission capacities of the interconnection links are not comparable with the case of the above scenarios. This is the reason why the problem of orchestrating an SDN/NFV network is still open and attracts a lot of research interest from both academia and industry.



More specifically, the whole problem of orchestrating an SDN/NFV network is very complex because it involves design work at both internode and intranode levels [11, 12]. At the internode level, and in all the cases where the time between each execution is significantly greater than the time needed to collect, compute, and disseminate results, the application of a centralized approach is practicable [13, 14]. In these cases, in fact, taking into consideration both traffic characterization and the required level of quality of service (QoS), the Orchestrator is able to decide in a centralized manner how many instances of the same function have to be run simultaneously, the network nodes that have to execute them, and the routing strategies that allow traffic flows to cross the nodes where the requested network functions are running.

Instead, centralizing a strategy dealing with operations that require dynamic reconfiguration of network resources is actually unfeasible for the Orchestrator, for problems of resilience and scalability. Therefore, adaptive management operations with short timescales require a distributed approach. The authors of [15] propose a framework to support adaptive resource management operations that involve short-timescale reconfiguration of network resources, showing how requirements in terms of load balancing and energy management [16-18] can be satisfied. Another work in the same direction is [19], which proposes a solution for the consolidation of VMs on local computing resources, exclusively based on local information. The work in [20] aims at alleviating the inter-VM network latency by defining a hypervisor scheduler algorithm that is able to take into consideration the allocation of the resources within a consolidated environment, scheduling VMs to reduce their waiting latency in the run queue. Another approach is applied in [21], which introduces a policy to manage the internal on/off switching of virtual network functions (VNFs) in NFV-compliant Customer Premises Equipment (CPE) devices.

The focus of this paper is on another fundamental problem that is inherent to resource allocation within an SDN/NFV node, that is, the decision of the percentage of CPU to be assigned to each VNF. If at a first glance this could seem a classical processor-sharing problem, widely explored in the past literature [15, 22, 23], it is actually much more complex, because performance can be strongly improved by leveraging on the correlation with the output queues associated with the network interface cards (NICs).

With all this in mind, the main contribution of this paper is the definition of a processor-sharing policy, in the following referred to as Network-Aware Round Robin (NARR), which is specific for SDN/NFV nodes. Starting from the consideration that packets that have received the service of a network function from a virtual machine (VM) running on a given node are enqueued to wait for transmission through a given NIC, the proposed strategy dynamically changes the slices of the CPU assigned to each VNF according to the state of the output NIC queues. More specifically, the NARR strategy gives a larger CPU slice to serve packets that will leave the node through the NIC that is currently less loaded, in such a way as to minimize wastes of the NIC output link capacities, also minimizing the overall delay experienced by packets traversing nodes that implement NARR.

As a side contribution, the paper calculates an on-off model for the traffic leaving the SDN/NFV node on each NIC output link. This model can be used as a building block for the design and performance evaluation of an entire network.

The paper is structured as follows. Section 2 describes the node architecture. Section 3 introduces the NARR strategy. Section 4 presents a case study and shows some numerical results. Finally, Section 5 draws some conclusions and discusses some future work.

2. System Description

The target of this section is the description of the system we consider in the rest of the paper. It is an SDN/NFV node, as the one considered in [11, 12], where we will apply the NARR strategy. Its architecture, shown in Figure 1(a), is compliant with the ETSI Specifications [24]. It is composed of three different domains, namely, the Compute domain, the Hypervisor domain, and the Infrastructure Network domain. The Compute domain provides the computational and storage hardware resources that allow the node to host the VNFs. Thanks to the computing and storage virtualization provided by the Hypervisor domain, a VM can be created, migrated from one node to another one, and halted, in order to optimize the deployment according to specific performance parameters. Communications among the VMs, and between the VMs and the external environment, are provided by the Infrastructure Network domain.

The SDN/NFV node is remotely controlled by the Orchestrator, whose architecture is shown in Figure 1(b). It is constituted by three main blocks. The Orchestration Engine executes all the algorithms and the strategies to manage and orchestrate the whole network. After each decision, the Orchestration Engine requests that the NFV Coordinator instantiate, migrate, or halt VMs, and consequently requests that the SDN Controller modify the flow tables of the SDN switches in the network, in such a way that traffic flows can traverse the VMs hosting the requested VNFs.

With this in mind, a functional architecture of the NFV node is represented in Figure 2. Its main components are the Processor, which manages the Compute domain and hosts the Hypervisor domain, and the Network Interface Cards (NICs), with their queues, which constitute the "Network Hardware" block in Figure 1.

Let $M$ be the number of virtual network functions (VNFs) that are running in the node, and let $L$ be the number of output NICs. In order to simplify notation, in the following we will assume that all the NICs have the same characteristics in terms of buffer capacity and output rate. So, let $K^{(\mathrm{NIC})}$ be the size of the queue associated with each NIC, that is, the maximum number of packets that each queue can contain, and let $\mu^{(\mathrm{NIC})}$ be the transmission rate of the output link associated with each NIC, expressed in bit/s.

The Flow Distributor block has the task of routing each entering flow towards the function required by the flow. It is a software SDN switch that can be implemented, for example, with Open vSwitch [25]. It routes the flows to the VMs running the requested VNFs according to the control messages received by the SDN Controller residing in the Orchestrator.


Figure 1: Network function virtualization infrastructure. (a) NFV node; (b) Orchestrator.

Figure 2: NFV node functional architecture.

The most common protocol that can be used for the communications between the SDN Controller and the Orchestrator is OpenFlow [26].

Let $\mu^{(P)}$ be the total processing rate of the processor, expressed in packets/s. This rate is shared among all the active functions according to a processor rate scheduling strategy. Let $\mu^{(F)}$ be the array whose generic element $\mu^{(F)}_{[m]}$, with $m \in \{1, \ldots, M\}$, is the portion of the processor rate assigned to the VM implementing the function $F_m$. Of course, we have
$$\sum_{m=1}^{M} \mu^{(F)}_{[m]} = \mu^{(P)}. \tag{1}$$

Once a packet has been served by the required function, it is sent to one of the NICs to exit from the node. If the NIC is transmitting another packet, the arriving packets are enqueued in the NIC queue. We will indicate the queue associated with the generic NIC $l$ as $Q^{(\mathrm{NIC})}_l$.

In order to implement the NARR strategy proposed in this paper, we realize the block relative to each function with a set of $L$ parallel queues, in the following referred to as inside-function queues, as shown in Figure 3 for the generic function $F_m$. The generic $l$th inside-function queue of the function $F_m$, indicated as $Q_{m,l}$ in Figure 3, is used to enqueue packets that, after receiving the service of the function $F_m$, will leave the node through the NIC $l$. Let $K^{(F)}_{\mathrm{Ins}}$ be the size of each inside-function queue. Each inside-function queue of the generic function $F_m$ receives a portion of the processor rate assigned to that function. Let $\mu^{(F_{\mathrm{Ins}})}_{[m,l]}$ be the portion of the processor rate assigned to the queue $Q_{m,l}$ of the function $F_m$. Of course, we have
$$\sum_{l=1}^{L} \mu^{(F_{\mathrm{Ins}})}_{[m,l]} = \mu^{(F)}_{[m]}. \tag{2}$$


Figure 3: Block diagram of the generic function $F_m$.

The portion of the processor rate associated with each inside-function queue is dynamically changed by the Processor Arbiter according to the NARR strategy described in Section 3.

3. NARR Processor-Sharing Strategy

The NARR (Network-Aware Round Robin) processor-sharing strategy observes the state of both the inside-function queues and the NIC queues, with the goal of reducing as far as possible the inactivity periods of the NIC output links. As already introduced in the Introduction, its definition starts from the fact that packets that have received the service of a VNF are enqueued to wait for transmission through a given NIC. So, in order to avoid output link capacity waste, NARR dynamically changes the slices of the CPU assigned to each VNF, and in particular to each inside-function queue, according to the state of the output NIC queues, assigning larger CPU slices to serve packets that will leave the node through less-loaded NICs.

More specifically, the Processor Arbiter decides the processor rate portions according to two different steps.

Step 1 (assignment of the processor rate portion to the aggregation of queues whose output is a specific NIC). This step meets the target of the proposed strategy, that is, to reduce as much as possible underutilization of the NIC output links and, as a consequence, delays in the relative queues. To this purpose, let us consider a virtual queue that contains all the packets that are stored in all the inside-function queues $Q_{m,l}$, for each $m \in [1, M]$, that is, all the packets that will leave the node through the NIC $l$. Let us indicate this virtual queue as $Q^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$ and its service rate as $\mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$. Of course, we have

$$\mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}} = \sum_{m=1}^{M} \mu^{(F_{\mathrm{Ins}})}_{[m,l]}. \tag{3}$$

With this in mind, the idea is to give a higher processor slice to the inside-function queues whose flows are directed to the NICs that are emptying.

Taking into account the goal of privileging the flows that will leave the node through underloaded NICs, the Processor Arbiter calculates $\mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$ as follows:
$$\mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}} = \frac{q_{\mathrm{ref}} - S^{(\mathrm{NIC})}_l}{\sum_{j=1}^{L} \left( q_{\mathrm{ref}} - S^{(\mathrm{NIC})}_j \right)}\, \mu^{(P)}, \tag{4}$$
where $S^{(\mathrm{NIC})}_l$ represents the state of the queue associated with the NIC $l$, while $q_{\mathrm{ref}}$ is defined as follows:
$$q_{\mathrm{ref}} = \min \left\{ \alpha \cdot \max_j \left( S^{(\mathrm{NIC})}_j \right),\; K^{(\mathrm{NIC})} \right\}. \tag{5}$$

The term $q_{\mathrm{ref}}$ is a reference target value calculated from the state of the NIC queue that has the highest length, amplified with a coefficient $\alpha$ and truncated to the maximum queue size $K^{(\mathrm{NIC})}$. It is determined in such a way that, if we consider $\alpha = 1$, the NIC queue that has the highest length does not receive packets from the inside-function queues, because their service rate is set to zero; the other queues receive packets with a rate that is proportional to the distance between their length and the length of the most overloaded NIC queue. However, through an extensive set of simulations, we deduced that setting $\alpha = 1$ causes bad performance, because there is always a group of inside-function queues that are not served. Instead, all the $\alpha$ values in the interval $]1, 2]$ give almost equivalent performance. For this reason, in the numerical analysis presented in Section 4, we have set $\alpha = 1.2$.

Step 2 (assignment of the processor rate portion to each inside-function queue). Let us consider the generic $l$th inside-function queue of the function $F_m$, that is, the queue $Q_{m,l}$. Its service rate is calculated as being proportional to the current state of this queue in comparison with the other $l$th queues of the other functions. To this purpose, let us indicate the state of the virtual queue $Q^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$ as $S^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$. Of course, it can be calculated as the sum of the states of all the inside-function queues $Q_{m,l}$, for each $m \in [1, M]$, that is,
$$S^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}} = \sum_{k=1}^{M} S^{(F_{\mathrm{Ins}})}_{k,l}. \tag{6}$$
So the service rate of the inside-function queue $Q_{m,l}$ is determined as a fraction of the service rate assigned at the first step to the virtual queue $Q^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$, that is, $\mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$, as follows:
$$\mu^{(F_{\mathrm{Ins}})}_{[m,l]} = \frac{S^{(F_{\mathrm{Ins}})}_{m,l}}{S^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}}\, \mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}. \tag{7}$$
Of course, if at any time an inside-function queue remains empty, the processor rate portion assigned to it will be shared among the other queues proportionally to the processor portions previously assigned. Likewise, if at some instant an empty queue receives a new packet, the previous processor rate portion is reassigned to that queue. A compact sketch of the two steps is reported below.
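The following Python fragment condenses the two steps; it is an illustrative sketch under our own naming, and boundary cases (e.g., all NIC queues completely full) as well as the redistribution of the slices of empty queues are not handled.

```python
def narr_slices(S_nic, S_ins, mu_P, K_nic, alpha=1.2):
    """NARR processor sharing, eqs. (4)-(7).
    S_nic[l]: occupancy of NIC queue l; S_ins[m][l]: occupancy of Q_{m,l};
    returns mu[m][l], the processor slice of each inside-function queue."""
    L, M = len(S_nic), len(S_ins)
    q_ref = min(alpha * max(S_nic), K_nic)           # eq. (5)
    w = [q_ref - s for s in S_nic]                   # distance from the target
    mu_aggr = [mu_P * wl / sum(w) for wl in w]       # eq. (4)
    mu = [[0.0] * L for _ in range(M)]
    for l in range(L):
        s_aggr = sum(S_ins[m][l] for m in range(M))  # eq. (6)
        if s_aggr == 0:
            continue  # in the node, this slice is redistributed to nonempty queues
        for m in range(M):
            mu[m][l] = mu_aggr[l] * S_ins[m][l] / s_aggr  # eq. (7)
    return mu
```

For instance, with $S^{(\mathrm{NIC})} = [50, 0, 30]$ and $\alpha = 1.2$, we get $q_{\mathrm{ref}} = 60$, so the aggregate slices are $0.1\,\mu^{(P)}$, $0.6\,\mu^{(P)}$, and $0.3\,\mu^{(P)}$: the flows directed to the empty NIC 2 are privileged, as in the temporal example of Section 4.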


4. Case Study

In this section, we present a numerical analysis of the behavior of an SDN/NFV node that applies the proposed NARR processor-sharing strategy, with the target of evaluating the achieved performance. To this purpose, we will consider two other processor-sharing strategies as reference, in the following referred to as round robin (RR) and queue-length weighted round robin (QLWRR). In both the reference cases, the node has the same $L$ NIC queues, but it has only $M$ processor queues, one for each function. Each of the $M$ queues has a size of $K^{(F)} = L \cdot K^{(F)}_{\mathrm{Ins}}$, where $K^{(F)}_{\mathrm{Ins}}$ represents the size of each internal function queue already defined so far for the proposed strategy.

The RR strategy applies the classical round robin scheduling policy to serve the $M$ function queues, that is, it serves each function queue with a rate $\mu^{(F)}_{\mathrm{RR}} = \mu^{(P)}/M$.

The QLWRR strategy, on the other hand, serves each function queue with a rate that is proportional to the queue length, that is,
$$\mu^{(F_m)}_{\mathrm{QLWRR}} = \frac{S^{(F_m)}}{\sum_{k=1}^{M} S^{(F_k)}}\, \mu^{(P)}, \tag{8}$$
where $S^{(F_m)}$ is the state of the queue associated with the function $F_m$. Both reference policies are sketched below.
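In the same style as the NARR sketch above, the two reference policies can be written as follows (again illustrative code with our own naming; the equal split used as a fallback when all queues are empty is our assumption).

```python
def rr_slices(M, mu_P):
    """RR: equal share of the processor to each of the M function queues."""
    return [mu_P / M] * M

def qlwrr_slices(S_f, mu_P):
    """QLWRR, eq. (8): share proportional to the function queue length."""
    tot = sum(S_f)
    return [mu_P / len(S_f)] * len(S_f) if tot == 0 else [mu_P * s / tot for s in S_f]
```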

4.1. Parameter Settings. In this numerical analysis, we consider a node with $M = 4$ VNFs and $L = 3$ output NICs. We loaded the node with a balanced traffic constituted by $N_F = 12$ flows, each characterized by a different 2-tuple (requested VNF, output NIC). Each flow has been generated with an on-off model characterized by exponentially distributed on and off periods. When a flow is in the off state, no packets arrive to the node from it; instead, when it is in the on state, packets arrive with an average rate $\lambda_{\mathrm{ON}}$. Each packet is assumed to have an exponentially distributed size with a mean of 1 kbyte. In all the simulations we have considered the same on-off cycle duration $\delta = T_{\mathrm{OFF}} + T_{\mathrm{ON}} = 5$ msec, while we have varied the burstiness. Burstiness of on-off sources is defined as
$$b = \frac{\lambda_{\mathrm{ON}}}{\lambda_{\mathrm{Mean}}}, \tag{9}$$
where $\lambda_{\mathrm{Mean}}$ is the mean emission rate. Now, indicating the probability of the ON state as $\pi_{\mathrm{ON}}$, and taking into account that $\lambda_{\mathrm{Mean}} = \lambda_{\mathrm{ON}} \cdot \pi_{\mathrm{ON}}$ and $\pi_{\mathrm{ON}} = T_{\mathrm{ON}}/(T_{\mathrm{OFF}} + T_{\mathrm{ON}})$, we have
$$b = \frac{T_{\mathrm{OFF}} + T_{\mathrm{ON}}}{T_{\mathrm{ON}}}. \tag{10}$$
In our analysis, the burstiness has been varied in the range $[2, 22]$. Consequently, we have derived the mean durations of the off and on periods as follows:
$$T_{\mathrm{ON}} = \frac{\delta}{b}, \qquad T_{\mathrm{OFF}} = \delta - T_{\mathrm{ON}}. \tag{11}$$
Finally, in order to maintain the same mean packet rate for different values of $T_{\mathrm{OFF}}$ and $T_{\mathrm{ON}}$, we have assumed that 75 packets are transmitted on average in each on period, that is, $\lambda_{\mathrm{ON}} = 75/T_{\mathrm{ON}}$. The resulting mean emission rate is $\lambda_{\mathrm{Mean}} = 122.9$ Mbit/s. These relations are summarized in the short helper below.
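The following small Python helper implements (9)-(11); the packet-count constant mirrors the 75-packet assumption above, and the names are our own.

```python
def onoff_params(b, delta=5e-3, pkts_per_on=75):
    """On-off source parameters for a given burstiness b (eqs. (9)-(11))."""
    T_on = delta / b                # eq. (11)
    T_off = delta - T_on
    lam_on = pkts_per_on / T_on     # packets/s while in the ON state
    lam_mean = pkts_per_on / delta  # = lam_on * T_on / delta, independent of b
    return T_on, T_off, lam_on, lam_mean
```

For any burstiness, the mean rate is $75/0.005 = 15{,}000$ packets/s, that is, about 122.9 Mbit/s with 1-kbyte (8192-bit) packets, consistently with the value given above.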

As far as the NICs are concerned, we have considered an output rate of $\mu^{(\mathrm{NIC})} = 980$ Mbit/s, so having a utilization coefficient on each NIC of $\rho^{(\mathrm{NIC})} = 0.5$, and a queue size $K^{(\mathrm{NIC})} = 3000$ packets. Instead, regarding the processor, we considered a size of each inside-function queue of $K^{(F)}_{\mathrm{Ins}} = 3000$ packets. Finally, we have analyzed two different processor cases. In the first case, we considered a processor that is able to process $\mu^{(P)} = 306$ kpackets/s, while in the second case we assumed a processor rate of $\mu^{(P)} = 204$ kpackets/s. Therefore, defining the processor utilization coefficient as
$$\rho^{(P)} = \frac{N_F \cdot \lambda_{\mathrm{Mean}}}{\mu^{(P)}}, \tag{12}$$
with $N_F \cdot \lambda_{\mathrm{Mean}}$ being the total mean arrival rate to the node, the two considered cases are characterized by processor utilization coefficients of $\rho^{(P)}_{\mathrm{Low}} = 0.6$ and $\rho^{(P)}_{\mathrm{High}} = 0.9$, respectively.

4.2. Numerical Results. In this section, we present some results achieved by discrete-event simulations. The simulation tool used in the paper is publicly available in [27]. We first present a temporal analysis of the main variables characterizing the SDN/NFV node, and then we show a performance comparison of NARR with the two strategies RR and QLWRR, taken as a reference.

For the temporal analysis, we focus on a short time interval of 720 μs, in order to be able to clearly highlight the evolution of the considered processes. We have loaded the node with an on-off traffic like the one described in Section 4.1, with a burstiness $b = 7$.

Figures 4, 5, and 6 show the time evolution of the length of the queues associated with the NICs, the processor slice assigned to each virtual queue loading each NIC, and the length of the same virtual queues. We can subdivide the considered time interval into three different periods.

In the first period, ranging between the instants 0.1063 and 0.1067, from Figure 4 we can notice that the NIC queue $Q^{(\mathrm{NIC})}_1$ has a greater length than the queue $Q^{(\mathrm{NIC})}_3$, while $Q^{(\mathrm{NIC})}_2$ is empty. For this reason, in this period, as shown in Figure 5, the processor is shared between $Q^{(\rightarrow \mathrm{NIC}_1)}_{\mathrm{Aggr}}$ and $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$, and the slice assigned to serve $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ is higher, in such a way that the two queue lengths $Q^{(\mathrm{NIC})}_1$ and $Q^{(\mathrm{NIC})}_3$ reach the same value, a situation that happens at the end of the first period, around the instant 0.1067. During this period, the behavior of both the virtual queues $Q^{(\rightarrow \mathrm{NIC}_1)}_{\mathrm{Aggr}}$ and $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ in Figure 6 remains flat, showing the fact that the received processor rates are able to balance the arrival rates.

At the beginning of the second period, which ranges between the instants 0.1067 and 0.10693, the processor rate assigned to the virtual queue $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ has become not sufficient to serve the amount of arriving traffic, and so the virtual queue $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ increases its length, as shown in Figure 6.

Figure 4: Length evolution of the NIC queues.

Figure 5: Packet rate assigned to the virtual queues.

Figure 6: Length evolution of the virtual queues.

During this second period, the processor slices assigned to the two queues $Q^{(\rightarrow \mathrm{NIC}_1)}_{\mathrm{Aggr}}$ and $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ are adjusted in such a way that $Q^{(\mathrm{NIC})}_1$ and $Q^{(\mathrm{NIC})}_3$ remain with comparable lengths.

The last period starts at the instant 0.10693, characterized by the fact that the aggregated queue $Q^{(\rightarrow \mathrm{NIC}_2)}_{\mathrm{Aggr}}$ leaves the empty state and therefore participates in the processor-sharing process. Since the NIC queue $Q^{(\mathrm{NIC})}_2$ is low-loaded, as shown in Figure 4, the largest slice is assigned to $Q^{(\rightarrow \mathrm{NIC}_2)}_{\mathrm{Aggr}}$, in such a way that $Q^{(\mathrm{NIC})}_2$ can reach the same length of the other two NIC queues as soon as possible.

Now, in order to show how the second step of the proposed strategy works, we present the behavior of the inside-function queues whose output is sent to the NIC queue $Q^{(\mathrm{NIC})}_3$ during the same short time interval considered so far. More specifically, Figures 7 and 8 show the length of the considered inside-function queues and the processor slices assigned to them, respectively. The behavior of the queue $Q_{2,3}$ is not shown because it is empty in the considered period. As we can observe from the above figures, we can subdivide the interval into four periods:

(i) The first period, ranging in the interval [0.1063, 0.10657], is characterized by an empty state of the queue $Q_{3,3}$. Thus, in this period, the processor slice assigned to the aggregated queue $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ is shared by $Q_{1,3}$ and $Q_{4,3}$ only.

(ii) During the second period, ranging in the interval [0.10657, 0.1067], $Q_{1,3}$ is scarcely loaded (in particular, it is empty in the second part of this period), and so the processor slice assigned to $Q_{3,3}$ is increased.

(iii) In the third period, ranging between 0.1067 and 0.10693, all the queues increase and equally share the processor.

(iv) Finally, in the last period, starting at the instant 0.10693, as already observed in Figure 5, the processor slice assigned to the aggregated queue $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ is suddenly decreased, and consequently the slices assigned to the queues $Q_{1,3}$, $Q_{3,3}$, and $Q_{4,3}$ are decreased as well.

The steady-state analysis is presented in Figures 9, 10, and 11, which show the mean delay in the inside-function queues, the mean delay in the NIC queues, and the durations of the off- and on-states on the output links, respectively. The values reported in all the figures have been derived as the mean values of the results of many simulation experiments, using Student's t-distribution and with a 95% confidence interval. The number of experiments carried out to evaluate each numerical result has been automatically decided by the simulation tool, with the requirement of achieving a confidence interval less than 0.001 of the estimated mean value. To this purpose, the confidence interval is calculated at the end of each run, and simulation is stopped only when the confidence interval matches the maximum error requirement. The duration of each run has been chosen in such a way that the sample standard deviation is so low that less than 30 runs are enough to match the requirement. A sketch of such a stopping rule is given below.
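The stopping criterion just described can be outlined in a few lines of Python. This is a minimal formulation of our own: the normal quantile 1.96 is used in place of the exact Student's t value, which is acceptable for the run counts reported, and run_once stands for a user-provided function executing one independent simulation run.

```python
from statistics import mean, stdev

def run_until_confident(run_once, max_err=1e-3, min_runs=5, z=1.96):
    """Repeat independent runs until the 95% confidence half-width drops
    below max_err times the estimated mean value."""
    samples = []
    while True:
        samples.append(run_once())
        if len(samples) >= min_runs:
            m = mean(samples)
            half = z * stdev(samples) / len(samples) ** 0.5
            if m != 0.0 and half <= max_err * abs(m):
                return m, half
```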


Figure 7: Length evolution of the inside-function queues $Q_{m,3}$, for each $m \in [1, M]$. (a) Queue $Q_{1,3}$; (b) queue $Q_{3,3}$; (c) queue $Q_{4,3}$.

The figures compare the results achieved with the proposed strategy with the ones obtained with the two strategies RR and QLWRR. As said so far, results have been obtained against the burstiness and for two different values of the utilization coefficient, that is, $\rho^{(P)}_{\mathrm{Low}} = 0.6$ and $\rho^{(P)}_{\mathrm{High}} = 0.9$. As expected, the mean lengths of the processor queues and of the NIC queues increase with both the burstiness and the utilization coefficient.

Instead, we can note that the mean length of the processor queues is not affected by the applied policy. In fact, packets requiring the same function are enqueued in a unique queue and served with a rate $\mu^{(P)}/4$ when the RR or QLWRR strategies are applied, while, when the NARR strategy is used, they are split into 12 different queues and served with a rate $\mu^{(P)}/12$. If in classical queueing theory [28] the second case is worse than the first one, because of the presence of service rate wastes during the more probable periods of empty queues, this is not the case here, because the processor capacity of an empty queue is dynamically reassigned to the other queues.

The advantage of the NARR strategy is evident in Figure 10, where the mean delay in the NIC queues is represented. In fact, we can observe that, by only giving more processor rate to the most loaded processor queues (with the QLWRR strategy), performance improvements are negligible, while, by applying the NARR strategy, we are able to obtain a delay reduction of about 12% in the case of a more performant processor ($\rho^{(P)}_{\mathrm{Low}} = 0.6$), reaching 50% when the processor works with $\rho^{(P)}_{\mathrm{High}} = 0.9$. The performance gain achieved with the NARR strategy increases with the burstiness and the processor load conditions, which are both likely: the first condition is due to the high burstiness of the Internet traffic; the second one is true as well because the processor should not be overdimensioned for economic purposes; otherwise, if overdimensioned, it can be controlled with a processor rate management policy like the one presented in [29], in order to save energy.

Finally, Figure 11 shows the mean durations of the on- and off-periods on the node output links. Only one curve is shown for each case because we have considered a utilization coefficient $\rho^{(\mathrm{NIC})} = 0.5$ on each NIC queue, and therefore in the considered case we have $T_{\mathrm{ON}} = T_{\mathrm{OFF}}$. As expected, the mean on-off durations are higher when the processor rate is higher (i.e., lower utilization coefficient): in this case, batches of packets reach the NIC queues more quickly, and the busy periods of the output links become longer. These results can be used to model the output traffic of each SDN/NFV node as an Interrupted Poisson Process (IPP) [30]. This model can be iteratively used to represent the input traffic of other nodes, with the final target of realizing the model of a whole SDN/NFV network. A minimal generator for such a source is sketched below.
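For reuse of these results, an IPP source with the measured mean on/off durations can be sampled with a few lines of Python (an illustrative sketch; seeding and packet payloads are omitted).

```python
import random

def ipp_arrivals(T_on, T_off, lam_on, horizon):
    """Arrival times of an Interrupted Poisson Process [30]: Poisson arrivals
    of rate lam_on during exponential ON periods, silence during OFF periods."""
    t, on, arrivals = 0.0, True, []
    while t < horizon:
        dwell = random.expovariate(1.0 / (T_on if on else T_off))
        if on:
            tau = t + random.expovariate(lam_on)
            while tau < min(t + dwell, horizon):
                arrivals.append(tau)
                tau += random.expovariate(lam_on)
        t += dwell
        on = not on
    return arrivals
```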


Figure 8: Processor rate assigned to the inside-function queues $Q_{m,3}$, for each $m \in [1, M]$. (a) $\mu^{(F_{\mathrm{Ins}})}_{[1,3]}$; (b) $\mu^{(F_{\mathrm{Ins}})}_{[3,3]}$; (c) $\mu^{(F_{\mathrm{Ins}})}_{[4,3]}$.


5. Conclusions and Future Work

This paper addresses the problem of intranode resource allocation by introducing NARR, a processor-sharing strategy that leverages on the consideration that, in any SDN/NFV node, packets that have received the service of a VNF are enqueued to wait for transmission through one of the output NICs. Therefore, the idea at the base of NARR is to dynamically change the slices of the CPU assigned to each VNF according to the state of the output NIC queues, giving more CPU to serve packets that will leave the node through the less-loaded NICs. In this way, wastes of the NIC output link capacities are minimized, and consequently the overall delay experienced by packets traversing the nodes that implement NARR is reduced.

SDN/NFV nodes that implement NARR can coexist in the same network with nodes that use other strategies, so facilitating a gradual introduction in the Future Internet.

As a side contribution, the simulator tool, which is publicly available on the web, gives the on-off model of the output links associated with each NIC as one of the results. This model can be used as a building block to realize a model for the design and performance evaluation of a whole SDN/NFV network.

Figure 9: Mean delay in the function internal queues.

Figure 10: Mean delay in the NIC queues.


Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work has been partially supported by the INPUT (In-Network Programmability for Next-Generation Personal cloUd Service Support) project, funded by the European Commission under the Horizon 2020 Programme (Call H2020-ICT-2014-1, Grant no. 644672).

Figure 11: Mean off- and on-states duration of the traffic on the output links.


References

[1] White paper on "Software-Defined Networking: The New Norm for Networks," https://www.opennetworking.org.

[2] M. Yu, L. Jose, and R. Miao, "Software defined traffic measurement with OpenSketch," in Proceedings of the Symposium on Network Systems Design and Implementation (NSDI '13), vol. 13, pp. 29-42, Lombard, Ill, USA, April 2013.

[3] H. Kim and N. Feamster, "Improving network management with software defined networking," IEEE Communications Magazine, vol. 51, no. 2, pp. 114-119, 2013.

[4] S. H. Yeganeh, A. Tootoonchian, and Y. Ganjali, "On scalability of software-defined networking," IEEE Communications Magazine, vol. 51, no. 2, pp. 136-141, 2013.

[5] B. A. A. Nunes, M. Mendonca, X.-N. Nguyen, K. Obraczka, and T. Turletti, "A survey of software-defined networking: past, present, and future of programmable networks," IEEE Communications Surveys and Tutorials, vol. 16, no. 3, pp. 1617-1634, 2014.

[6] White paper on "Network Functions Virtualisation," http://portal.etsi.org/NFV/NFV_White_Paper.pdf.

[7] Network Functions Virtualisation (NFV): Network Operator Perspectives on Industry Progress, ETSI, October 2013, http://portal.etsi.org/NFV/NFV_White_Paper2.pdf.

[8] A. Manzalini, R. Saracco, E. Zerbini et al., "Software-Defined Networks for Future Networks and Services," White Paper based on the IEEE Workshop SDN4FNS, http://sites.ieee.org/sdn4fns/whitepaper.

[9] R. Z. Frantz, R. Corchuelo, and J. L. Arjona, "An efficient orchestration engine for the cloud," in Proceedings of the IEEE 3rd International Conference on Cloud Computing Technology and Science (CloudCom '11), vol. 2, pp. 711-716, Athens, Greece, December 2011.

[10] K. Bousselmi, Z. Brahmi, and M. M. Gammoudi, "Cloud services orchestration: a comparative study of existing approaches," in Proceedings of the 28th IEEE International Conference on Advanced Information Networking and Applications Workshops (WAINA '14), pp. 410-416, Victoria, Canada, May 2014.

[11] A. Lombardo, A. Manzalini, V. Riccobene, and G. Schembra, "An analytical tool for performance evaluation of software defined networking services," in Proceedings of the IEEE Network Operations and Management Symposium (NOMS '14), pp. 1-7, Krakow, Poland, May 2014.

[12] A. Lombardo, A. Manzalini, G. Schembra, G. Faraci, C. Rametta, and V. Riccobene, "An open framework to enable NetFATE (network functions at the edge)," in Proceedings of the IEEE Workshop on Management Issues in SDN, SDI and NFV (Mission '15), pp. 1-6, London, UK, April 2015.

[13] A. Manzalini, R. Minerva, F. Callegati, W. Cerroni, and A. Campi, "Clouds of virtual machines in edge networks," IEEE Communications Magazine, vol. 51, no. 7, pp. 63-70, 2013.

[14] H. Moens and F. De Turck, "VNF-P: a model for efficient placement of virtualized network functions," in Proceedings of the 10th International Conference on Network and Service Management (CNSM '14), pp. 418-423, Rio de Janeiro, Brazil, November 2014.

[15] K. Katsalis, G. S. Paschos, Y. Viniotis, and L. Tassiulas, "CPU provisioning algorithms for service differentiation in cloud-based environments," IEEE Transactions on Network and Service Management, vol. 12, no. 1, pp. 61-74, 2015.

[16] D. Tuncer, M. Charalambides, G. Pavlou, and N. Wang, "Towards decentralized and adaptive network resource management," in Proceedings of the 7th International Conference on Network and Service Management (CNSM '11), pp. 1-6, Paris, France, October 2011.

[17] D. Tuncer, M. Charalambides, G. Pavlou, and N. Wang, "DACoRM: a coordinated, decentralized and adaptive network resource management scheme," in Proceedings of the IEEE Network Operations and Management Symposium (NOMS '12), pp. 417-425, IEEE, Maui, Hawaii, USA, April 2012.

[18] M. Charalambides, D. Tuncer, L. Mamatas, and G. Pavlou, "Energy-aware adaptive network resource management," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 369-377, Ghent, Belgium, May 2013.

[19] C. Mastroianni, M. Meo, and G. Papuzzo, "Probabilistic consolidation of virtual machines in self-organizing cloud data centers," IEEE Transactions on Cloud Computing, vol. 1, no. 2, pp. 215-228, 2013.

[20] B. Guan, J. Wu, Y. Wang, and S. U. Khan, "CIVSched: a communication-aware inter-VM scheduling technique for decreased network latency between co-located VMs," IEEE Transactions on Cloud Computing, vol. 2, no. 3, pp. 320-332, 2014.

[21] G. Faraci and G. Schembra, "An analytical model to design and manage a green SDN/NFV CPE node," IEEE Transactions on Network and Service Management, vol. 12, no. 3, pp. 435-450, 2015.

[22] L. Kleinrock, "Time-shared systems: a theoretical treatment," Journal of the ACM, vol. 14, no. 2, pp. 242-261, 1967.

[23] S. F. Yashkov and A. S. Yashkova, "Processor sharing: a survey of the mathematical theory," Automation and Remote Control, vol. 68, no. 9, pp. 1662-1731, 2007.

[24] ETSI GS NFV-INF 001 v1.1.1, "Network Functions Virtualization (NFV): Infrastructure Overview," 2015, https://www.etsi.org/deliver/etsi_gs/NFV-INF/001_099/001/01.01.01_60/gs_NFV-INF001v010101p.pdf.

[25] T. N. Subedi, K. K. Nguyen, and M. Cheriet, "OpenFlow-based in-network Layer-2 adaptive multipath aggregation in data centers," Computer Communications, vol. 61, pp. 58-69, 2015.

[26] N. McKeown, T. Anderson, H. Balakrishnan et al., "OpenFlow: enabling innovation in campus networks," ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, pp. 69-74, 2008.

[27] NARR simulator v0.1, http://www.diit.unict.it/arti/Tools/NARRSimulator_v01.zip.

[28] L. Kleinrock, Queueing Systems. Volume 1: Theory, Wiley-Interscience, 1st edition, 1975.

[29] R. Bruschi, P. Lago, A. Lombardo, and G. Schembra, "Modeling power management in networked devices," Computer Communications, vol. 50, pp. 95-109, 2014.

[30] W. Fischer and K. S. Meier-Hellstern, "The Markov-Modulated Poisson Process (MMPP) cookbook," Performance Evaluation, vol. 18, no. 2, pp. 149-171, 1993.


Contents

Design of High Throughput and Cost-Efficient Data Center Networks, Vincenzo Eramo, Xavier Hesselbach-Serra, Yan Luo, and Juan Felipe Botero
Volume 2016, Article ID 4695185, 2 pages

Virtual Networking Performance in OpenStack Platform for Network Function Virtualization, Franco Callegati, Walter Cerroni, and Chiara Contoli
Volume 2016, Article ID 5249421, 15 pages

Server Resource Dimensioning and Routing of Service Function Chain in NFV Network Architectures, V. Eramo, A. Tosti, and E. Miucci
Volume 2016, Article ID 7139852, 12 pages

A Game for Energy-Aware Allocation of Virtualized Network Functions, Roberto Bruschi, Alessandro Carrega, and Franco Davoli
Volume 2016, Article ID 4067186, 10 pages

A Processor-Sharing Scheduling Strategy for NFV Nodes, Giuseppe Faraci, Alfio Lombardo, and Giovanni Schembra
Volume 2016, Article ID 3583962, 10 pages

Editorial

Design of High Throughput and Cost-Efficient Data Center Networks

Vincenzo Eramo,1 Xavier Hesselbach-Serra,2 Yan Luo,3 and Juan Felipe Botero4

1 Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, 00184 Rome, Italy
2 Network Engineering Department (ENTEL), Universitat Politecnica de Catalunya, 08034 Barcelona, Spain
3 Department of Electronics and Computers Engineering (DECE), University of Massachusetts Lowell, Lowell, MA 01854, USA
4 Department of Electronic and Telecommunications Engineering (DETE), Universidad de Antioquia, Oficina 19-450, Medellín, Colombia

Correspondence should be addressed to Vincenzo Eramo; vincenzo.eramo@uniroma1.it

Received 20 April 2016; Accepted 20 April 2016

Copyright © 2016 Vincenzo Eramo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Data centers (DC) are characterized by the sharing of compute and storage resources to support Internet services. Today many companies (Amazon, Google, Facebook, etc.) use data centers to offer storage, web search, and large-computation services, running multibillion dollar businesses. The servers are interconnected by elements (switches, routers, interconnection systems, etc.) of a network platform that is referred to as a Data Center Network (DCN).

Network Function Virtualization (NFV) technology, introduced by the European Telecommunications Standards Institute (ETSI), applies cloud computing techniques in the telecommunication field, allowing network functions to be virtualized and executed on software modules running in data centers. Any network service is represented by a Service Function Chain (SFC), that is, a set of VNFs to be executed according to a given order. The running of VNFs requires the instantiation of VNF instances (VNFIs), which in general are software modules executed on virtual machines.

The support of NFV requires high-performance servers, because network services impose stricter requirements than classical cloud applications.

The purpose of this special issue is to study and evaluate new solutions for the support of NFV technology.

The special issue consists of four papers, whose brief summaries are listed below.

"Server Resource Dimensioning and Routing of Service Function Chain in NFV Network Architectures" by V. Eramo et al. focuses on the resource dimensioning and SFC routing problems in the NFV architecture. The objective of the problem is to minimize the number of SFCs dropped. The authors formulate the optimization problem and, due to its NP-hard complexity, propose heuristics for both cases of offline and online traffic demand.

"A Game for Energy-Aware Allocation of Virtualized Network Functions" by R. Bruschi et al. presents and evaluates an energy-aware, game-theory-based solution for resource allocation of Virtualized Network Functions (VNFs) within NFV environments. The authors consider each VNF as a player of the problem that competes for the physical network node capacity pool, seeking the minimization of individual cost functions. The physical network nodes dynamically adjust their processing capacity according to the incoming workload flows by means of an Adaptive Rate strategy that aims at minimizing the product of energy consumption and processing delay.

"A Processor-Sharing Scheduling Strategy for NFV Nodes" by G. Faraci et al. focuses on the allocation strategies of processing resources to the virtual machines running the VNFs. The main contribution of the paper is the definition of a processor-sharing policy, referred to as Network-Aware


Round Robin (NARR). The proposed strategy dynamically changes the slices of the CPU assigned to each VNF according to the state of the output network interface card queues. In order not to waste output link bandwidth, more processing resources are assigned to the VNFs whose packets are addressed towards the least loaded output NIC.

"Virtual Networking Performance in OpenStack Platform for Network Function Virtualization" by F. Callegati et al. evaluates the performance of an Open Source Virtual Infrastructure Manager (VIM), OpenStack, focusing in particular on packet forwarding performance issues. A set of experiments is presented that refer to a number of scenarios inspired by the cloud computing and NFV paradigms, considering both single- and multitenant scenarios.

Vincenzo Eramo
Xavier Hesselbach-Serra
Yan Luo
Juan Felipe Botero

Research Article

Virtual Networking Performance in OpenStack Platform for Network Function Virtualization

Franco Callegati Walter Cerroni and Chiara Contoli

DEI, University of Bologna, Via Venezia 52, 47521 Cesena, Italy

Correspondence should be addressed to Walter Cerroni; walter.cerroni@unibo.it

Received 19 October 2015; Revised 19 January 2016; Accepted 30 March 2016

Academic Editor: Yan Luo

Copyright © 2016 Franco Callegati et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The emerging Network Function Virtualization (NFV) paradigm, coupled with the highly flexible and programmatic control of network devices offered by Software Defined Networking solutions, enables unprecedented levels of network virtualization that will definitely change the shape of future network architectures, where legacy telco central offices will be replaced by cloud data centers located at the edge. On the one hand, this software-centric evolution of telecommunications will allow network operators to take advantage of the increased flexibility and reduced deployment costs typical of cloud computing. On the other hand, it will pose a number of challenges in terms of virtual network performance and customer isolation. This paper intends to provide some insights on how an open-source cloud computing platform such as OpenStack implements multitenant network virtualization and how it can be used to deploy NFV, focusing in particular on packet forwarding performance issues. To this purpose, a set of experiments is presented that refer to a number of scenarios inspired by the cloud computing and NFV paradigms, considering both single tenant and multitenant scenarios. From the results of the evaluation, it is possible to highlight potentials and limitations of running NFV on OpenStack.

1. Introduction

Despite the original vision of the Internet as a set of networks interconnected by distributed layer 3 routing nodes, nowadays IP datagrams are not simply forwarded to their final destination based on IP header and next-hop information. A number of so-called middle-boxes process IP traffic, performing cross-layer tasks such as address translation, packet inspection and filtering, QoS management, and load balancing. They represent a significant fraction of network operators' capital and operational expenses. Moreover, they are closed systems, and the deployment of new communication services is strongly dependent on the product capabilities, causing the so-called "vendor lock-in" and Internet "ossification" phenomena [1]. A possible solution to this problem is the adoption of virtualized middle-boxes based on open software and hardware solutions. Network virtualization brings great advantages in terms of flexible network management, performed at the software level, and possible coexistence of multiple customers sharing the same physical infrastructure (i.e., multitenancy). Network virtualization

solutions are already widely deployed at different protocol layers, including Virtual Local Area Networks (VLANs), multilayer Virtual Private Network (VPN) tunnels over public wide-area interconnections, and Overlay Networks [2].

Today the combination of emerging technologies such as Network Function Virtualization (NFV) and Software Defined Networking (SDN) promises to bring innovation one step further. SDN provides a more flexible and programmatic control of network devices and fosters new forms of virtualization that will definitely change the shape of future network architectures [3], while NFV defines standards to deploy software-based building blocks implementing highly flexible network service chains capable of adapting to the rapidly changing user requirements [4].

As a consequence, it is possible to imagine a medium-term evolution of the network architectures where middle-boxes will turn into virtual machines (VMs) implementing network functions within cloud computing infrastructures, and telco central offices will be replaced by data centers located at the edge of the network [5-7]. Network operators will take advantage of the increased flexibility and reduced


deployment costs typical of the cloud-based approach, paving the way to the upcoming software-centric evolution of telecommunications [8]. However, a number of challenges must be dealt with, in terms of system integration, data center management, and packet processing performance. For instance, if VLANs are used in the physical switches and in the virtual LANs within the cloud infrastructure, a suitable integration is necessary, and the coexistence of different IP virtual networks dedicated to multiple tenants must be seamlessly guaranteed with proper isolation.

Then a few questions are naturally raised: Will cloud computing platforms be actually capable of satisfying the requirements of complex communication environments such as the operators' edge networks? Will data centers be able to effectively replace the existing telco infrastructures at the edge? Will virtualized networks provide performance comparable to that achieved with current physical networks, or will they pose significant limitations? Indeed, the answer to these questions will be a function of the cloud management platform considered. In this work the focus is on OpenStack, which is among the state-of-the-art Linux-based virtualization and cloud management tools. Developed by the open-source software community, OpenStack implements the Infrastructure-as-a-Service (IaaS) paradigm in a multitenant context [9].

To the best of our knowledge, not much work has been reported about the actual performance limits of network virtualization in OpenStack cloud infrastructures under the NFV scenario. Some authors assessed the performance of Linux-based virtual switching [10, 11], while others investigated network performance in public cloud services [12]. Solutions for low-latency SDN implementation on high-performance cloud platforms have also been developed [13]. However, none of the above works specifically deals with NFV scenarios on an OpenStack platform. Although some mechanisms for effectively placing virtual network functions within an OpenStack cloud have been presented [14], a detailed analysis of their network performance has not been provided yet.

This paper aims at providing insights on how the OpenStack platform implements multitenant network virtualization, focusing in particular on performance issues and trying to fill a gap that is starting to get attention also from the OpenStack developer community [15]. The paper's objective is to identify performance bottlenecks in the cloud implementation of the NFV paradigm. An ad hoc set of experiments was designed to evaluate OpenStack performance under critical load conditions, in both single tenant and multitenant scenarios. The results reported in this work extend the preliminary assessment published in [16, 17].

The paper is structured as follows: the network virtualization concept in cloud computing infrastructures is further elaborated in Section 2; the OpenStack virtual network architecture is illustrated in Section 3; the experimental test-bed that we have deployed to assess its performance is presented in Section 4; the results obtained under different scenarios are discussed in Section 5; some conclusions are finally drawn in Section 6.

2. Cloud Network Virtualization

Generally speaking, network virtualization is not a new concept. Virtual LANs, Virtual Private Networks, and Overlay Networks are examples of virtualization techniques already widely used in networking, mostly to achieve isolation of traffic flows and/or of whole network sections, either for security or for functional purposes such as traffic engineering and performance optimization [2].

Upon considering cloud computing infrastructures, the concept of network virtualization evolves even further. It is no longer just a matter of configuring physical devices to obtain some additional functionality in virtual form: in cloud infrastructures, whole parts of the network are virtual, implemented with software devices and/or functions running within the servers. This new "softwarized" network implementation scenario allows novel network control and management paradigms. In particular, the synergies between NFV and SDN offer programmatic capabilities that allow easily defining and flexibly managing multiple virtual network slices at levels not achievable before [1].

In cloud networking, the typical scenario is a set of VMs dedicated to a given tenant, able to communicate with each other as if connected to the same Local Area Network (LAN), independently of the physical server or servers they are running on. The VMs and LANs of different tenants have to be isolated and should communicate with the outside world only through layer 3 routing and filtering devices. From such requirements stem two major issues to be addressed in cloud networking: (i) integration of any set of virtual networks defined in the data center physical switches with the specific virtual network technologies adopted by the hosting servers and (ii) isolation among virtual networks that must be logically separated because of being dedicated to different purposes or different customers. Moreover, these problems should be solved with performance optimization in mind, for instance, aiming at keeping VMs with intensive exchange of data colocated in the same server, keeping local traffic inside the host, and thus reducing the need for external network resources and minimizing the communication latency.

The solution to these issues is usually fully supported by the VM manager (i.e., the Hypervisor) running on the hosting servers. Layer 3 routing functions can be executed by taking advantage of lightweight virtualization tools, such as Linux containers or network namespaces, resulting in isolated virtual networks with dedicated network stacks (e.g., IP routing tables and netfilter flow states) [18]. Similarly, layer 2 switching is typically implemented by means of kernel-level virtual bridges/switches interconnecting a VM's virtual interface to a host's physical interface. Moreover, the VM placing algorithms may be designed to take networking issues into account, thus optimizing the networking in the cloud together with computation effectiveness [19]. Finally, it is worth mentioning that whatever network virtualization technology is adopted within a data center, it should be compatible with SDN-based implementation of the control plane (e.g., OpenFlow) for improved manageability and programmability [20].
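To make these building blocks concrete, the following minimal sketch (not part of the original test-bed; all names are illustrative, and it assumes a Linux host with iproute2 and root privileges) creates an isolated network namespace with its own IP stack and connects it to a kernel-level Linux bridge through a veth pair, reproducing in a few commands the layer 2/layer 3 isolation primitives discussed in this section:

    import subprocess

    def sh(args):
        # Run one iproute2 command and fail loudly if it does not apply.
        subprocess.run(args, check=True)

    # An isolated namespace: its own routing table and netfilter state,
    # as used by OpenStack for virtual routers and DHCP servers.
    sh(["ip", "netns", "add", "tenant-demo"])

    # A veth pair: one endpoint stays in the root namespace, the other
    # is moved inside the tenant namespace.
    sh(["ip", "link", "add", "veth-host", "type", "veth", "peer", "name", "veth-ns"])
    sh(["ip", "link", "set", "veth-ns", "netns", "tenant-demo"])

    # A kernel-level Linux bridge emulating the layer 2 segment.
    sh(["ip", "link", "add", "name", "br-demo", "type", "bridge"])
    sh(["ip", "link", "set", "veth-host", "master", "br-demo"])
    sh(["ip", "link", "set", "br-demo", "up"])
    sh(["ip", "link", "set", "veth-host", "up"])

    # Address the namespace side and bring it up.
    sh(["ip", "netns", "exec", "tenant-demo",
        "ip", "addr", "add", "10.0.0.1/24", "dev", "veth-ns"])
    sh(["ip", "netns", "exec", "tenant-demo",
        "ip", "link", "set", "veth-ns", "up"])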

Figure 1: Main components of an OpenStack cloud setup.

For the purposes of this work, the implementation of layer 2 connectivity in the cloud environment is of particular relevance. Many Hypervisors running on Linux systems implement the LANs inside the servers using Linux Bridge, the native kernel bridging module [21]. This solution is straightforward and is natively integrated with the powerful Linux packet filtering and traffic conditioning kernel functions. The overall performance of this solution should be at a reasonable level when the system is not overloaded [22]. The Linux Bridge basically works as a transparent bridge with MAC learning, providing the same functionality as a standard Ethernet switch in terms of packet forwarding. But such standard behavior is not compatible with SDN and is not flexible enough when aspects such as multitenant traffic isolation, transparent VM mobility, and fine-grained forwarding programmability are critical. The Linux-based bridging alternative is Open vSwitch (OVS), a software switching facility specifically designed for virtualized environments and capable of reaching kernel-level performance [23]. OVS is also OpenFlow-enabled and therefore fully compatible and integrated with SDN solutions.

3. OpenStack Virtual Network Infrastructure

OpenStack provides cloud managers with a web-based dashboard as well as a powerful and flexible Application Programmable Interface (API) to control a set of physical hosting servers executing different kinds of Hypervisors (in general, OpenStack is designed to manage a number of computers hosting application servers; these application servers can be executed by fully fledged VMs, lightweight containers, or bare-metal hosts; in this work we focus on the most challenging case of application servers running on VMs) and to manage the required storage facilities and virtual network infrastructures. The OpenStack dashboard also allows instantiating computing and networking resources within the data center infrastructure with a high level of transparency. As illustrated in Figure 1, a typical OpenStack cloud is composed of a number of physical nodes and networks:

(i) Controller node: managing the cloud platform.
(ii) Network node: hosting the networking services for the various tenants of the cloud and providing external connectivity.
(iii) Compute nodes: as many hosts as needed in the cluster to execute the VMs.
(iv) Storage nodes: to store data and VM images.
(v) Management network: the physical networking infrastructure used by the controller node to manage the OpenStack cloud services running on the other nodes.
(vi) Instance/tunnel network (or data network): the physical network infrastructure connecting the network node and the compute nodes, to deploy virtual tenant networks and allow inter-VM traffic exchange and VM connectivity to the cloud networking services running in the network node.
(vii) External network: the physical infrastructure enabling connectivity outside the data center.

OpenStack has a component specifically dedicated to network service management: this component, formerly known as Quantum, was renamed as Neutron in the Havana release. Neutron decouples the network abstractions from the actual implementation and provides administrators and users with a flexible interface for virtual network management. The Neutron server is centralized and typically runs in the controller node. It stores all network-related information and implements the virtual network infrastructure in a distributed and coordinated way. This allows Neutron to transparently manage multitenant networks across multiple compute nodes and to provide transparent VM mobility within the data center.

Neutron's main network abstractions are:

(i) network, a virtual layer 2 segment;
(ii) subnet, a layer 3 IP address space used in a network;
(iii) port, an attachment point to a network and to one or more subnets on that network;
(iv) router, a virtual appliance that performs routing between subnets and address translation;
(v) DHCP server, a virtual appliance in charge of IP address distribution;
(vi) security group, a set of filtering rules implementing a cloud-level firewall.

A cloud customer wishing to implement a virtual infrastructure in the cloud is considered an OpenStack tenant and can use the OpenStack dashboard to instantiate computing and networking resources, typically creating a new network and the necessary subnets, optionally spawning the related DHCP servers, then starting as many VM instances as required, based on a given set of available images, and specifying the subnet (or subnets) to which each VM is connected. Neutron takes care of creating a port on each specified subnet (and its underlying network) and of connecting the VM to that port, while the DHCP service on that network (resident in the network node) assigns a fixed IP address to it. Other virtual appliances (e.g., routers providing global connectivity) can be implemented directly in the cloud platform, by means of containers and network namespaces typically defined in the network node. The different tenant networks are isolated by means of VLANs and network namespaces, whereas the security groups protect the VMs from external attacks or unauthorized access. When some VM instances offer services that must be reachable by external users, the cloud provider defines a pool of floating IP addresses on the external network and configures the network node with VM-specific forwarding rules based on those floating addresses.
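The same workflow can also be scripted against the Neutron and Nova APIs instead of the dashboard. The following sketch uses the openstacksdk Python library (not the interface used in this work); the network, image, and flavor names are placeholders that depend on the specific cloud:

    import openstack

    # Credentials are taken from clouds.yaml or OS_* environment variables.
    conn = openstack.connect()

    # Create a tenant network and a subnet; Neutron will spawn the
    # corresponding DHCP server in the network node.
    net = conn.network.create_network(name="tenant1-net")
    conn.network.create_subnet(network_id=net.id, name="tenant1-subnet",
                               ip_version=4, cidr="10.0.1.0/24")

    # Boot a VM attached to the new network: Neutron creates a port on
    # the subnet and the DHCP service assigns it a fixed IP address.
    image = conn.compute.find_image("ubuntu-server")
    flavor = conn.compute.find_flavor("m1.small")
    server = conn.compute.create_server(name="tenant1-vm",
                                        image_id=image.id,
                                        flavor_id=flavor.id,
                                        networks=[{"uuid": net.id}])
    conn.compute.wait_for_server(server)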

OpenStack implements the virtual network infrastructure (VNI) exploiting multiple virtual bridges connecting virtual and/or physical interfaces that may reside in different network namespaces. To better understand such a complex system, a graphical tool was developed to display all the network elements used by OpenStack [24]. Two examples, showing the internal state of a network node connected to three virtual subnets and a compute node running two VMs, are displayed in Figures 2 and 3, respectively.

Each node runs an OVS-based integration bridge named br-int and, connected to it, an additional OVS bridge for each data center physical network attached to the node. So the network node (Figure 2) includes br-tun for the instance/tunnel network and br-ex for the external network. A compute node (Figure 3) includes br-tun only.

Layer 2 virtualization and multitenant isolation on the physical network can be implemented using either VLANs or layer 2-in-layer 3/4 tunneling solutions, such as Virtual eXtensible LAN (VXLAN) or Generic Routing Encapsulation (GRE), which allow extending the local virtual networks also to remote data centers [25]. The examples shown in Figures 2 and 3 refer to the case of tenant isolation implemented with GRE tunnels on the instance/tunnel network. Whatever virtualization technology is used in the physical network, its virtual networks must be mapped into the VLANs used internally by Neutron to achieve isolation. This is performed by taking advantage of the programmable features available in OVS, through the insertion of appropriate OpenFlow mapping rules in br-int and br-tun.
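As an illustration of this mapping, the sketch below installs two OpenFlow rules of the kind Neutron relies on, translating between a GRE tunnel ID and the VLAN tag used internally (the port numbers, tunnel ID, and VLAN ID are placeholders, and the real Neutron pipeline spreads this logic across several OpenFlow tables):

    import subprocess

    # Ingress: frames arriving from the GRE tunnel with tunnel ID 0x1
    # are tagged with the internal VLAN 101 and sent towards br-int.
    subprocess.run(["ovs-ofctl", "add-flow", "br-tun",
                    "in_port=2,tun_id=0x1,actions=mod_vlan_vid:101,output:1"],
                   check=True)

    # Egress: frames tagged with VLAN 101 have the tag stripped and are
    # encapsulated into the GRE tunnel with the matching tunnel ID.
    subprocess.run(["ovs-ofctl", "add-flow", "br-tun",
                    "in_port=1,dl_vlan=101,actions=strip_vlan,set_tunnel:0x1,output:2"],
                   check=True)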

Virtual bridges are interconnected by means of either virtual Ethernet (veth) pairs or patch port pairs, consisting of two virtual interfaces that act as the endpoints of a pipe: anything entering one endpoint always comes out on the other side.

From the networking point of view, the creation of a new VM instance involves the following steps:

(i) The OpenStack scheduler component running in the controller node chooses the compute node that will host the VM.
(ii) A tap interface is created for each VM network interface to connect it to the Linux kernel.
(iii) A Linux Bridge dedicated to each VM network interface is created (in Figure 3 two of them are shown) and the corresponding tap interface is attached to it.
(iv) A veth pair connecting the new Linux Bridge to the integration bridge is created.

The veth pair clearly emulates the Ethernet cable that would connect the two bridges in real life. Nonetheless, why the new Linux Bridge is needed is not intuitive, as the VM's tap interface could be directly attached to br-int. In short, the reason is that the antispoofing rules currently implemented by Neutron adopt the native Linux kernel filtering functions (netfilter) applied to bridged tap interfaces, which work only under Linux Bridges. Therefore, the Linux Bridge is required as an intermediate element to interconnect the VM to the integration bridge. The security rules are applied to the Linux Bridge on the tap interface that connects the kernel-level bridge to the virtual Ethernet port of the VM running in user-space.
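The resulting per-VM plumbing can be reproduced manually. The sketch below (names are illustrative; OpenStack derives the actual qbr/qvb/qvo names from the Neutron port UUID) builds the Linux Bridge, the veth pair, and the attachment to the OVS integration bridge described above:

    import subprocess

    def sh(args):
        subprocess.run(args, check=True)

    # The intermediate Linux Bridge to which the VM tap interface and
    # the netfilter-based antispoofing rules are attached.
    sh(["ip", "link", "add", "name", "qbr-demo", "type", "bridge"])

    # The veth pair emulating the cable between the two bridges.
    sh(["ip", "link", "add", "qvb-demo", "type", "veth",
        "peer", "name", "qvo-demo"])
    sh(["ip", "link", "set", "qvb-demo", "master", "qbr-demo"])  # bridge side
    sh(["ovs-vsctl", "add-port", "br-int", "qvo-demo"])          # OVS side

    for dev in ("qbr-demo", "qvb-demo", "qvo-demo"):
        sh(["ip", "link", "set", dev, "up"])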

4. Experimental Setup

The previous section makes the complexity of the OpenStack virtual network infrastructure clear. To understand optimal design strategies in terms of network performance, it is of great importance to analyze it under critical traffic conditions and assess the maximum sustainable packet rate under different application scenarios. The goal is to isolate as much as possible the level of performance of the main OpenStack network components and determine where the bottlenecks are located, speculating on possible improvements. To this purpose, a test-bed including a controller node, one or two compute nodes (depending on the specific experiment), and a network node was deployed and used to obtain the results presented in the following.

Figure 2: Network elements in an OpenStack network node connected to three virtual subnets. Three OVS bridges (red boxes) are interconnected by patch port pairs (orange boxes). br-ex is directly attached to the external network physical interface (eth0), whereas a GRE tunnel is established on the instance/tunnel network physical interface (eth1) to connect br-tun with its counterpart in the compute node. A number of br-int ports (light-green boxes) are connected to four virtual router interfaces and three DHCP servers. An additional physical interface (eth2) connects the network node to the management network.

In the test-bed, each compute node runs KVM, the native Linux VM Hypervisor, and is equipped with 8 GB of RAM and a quad-core processor with hyperthreading enabled, resulting in 8 virtual CPUs.

The test-bed was configured to implement three possible use cases:

(1) a typical single tenant cloud computing scenario;
(2) a multitenant NFV scenario with dedicated network functions;
(3) a multitenant NFV scenario with shared network functions.

For each use case, multiple experiments were executed, as reported in the following. In the various experiments, typically a traffic source sends packets at increasing rate to a destination that measures the received packet rate and throughput. To this purpose, the RUDE & CRUDE tool was used for both traffic generation and measurement [26]. In some cases, the Iperf3 tool was also added to generate background traffic at a fixed data rate [27]. All physical interfaces involved in the experiments were Gigabit Ethernet network cards.
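For reference, the measurement principle is simple: a source paces UDP packets at a configured constant rate, and the sink counts what it receives. The following self-contained sketch approximates what the traffic generator does (it is not the RUDE & CRUDE code, and a user-space Python loop of this kind is only accurate up to a few tens of Kpps):

    import socket
    import time

    def send_constant_rate(dst, rate_pps, payload_len, duration_s):
        # Send UDP packets at a fixed rate; return how many were sent.
        # For a 1500-byte IP packet, payload_len is 1472 bytes
        # (1500 minus 20 bytes of IP header and 8 bytes of UDP header).
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        payload = b"\x00" * payload_len
        interval = 1.0 / rate_pps
        sent = 0
        t0 = time.monotonic()
        while time.monotonic() - t0 < duration_s:
            sock.sendto(payload, dst)
            sent += 1
            # Busy-wait pacing keeps the average rate close to rate_pps.
            while time.monotonic() - t0 < sent * interval:
                pass
        return sent

    # Example: 10 Kpps of 1500-byte IP packets for 10 s.
    # send_constant_rate(("192.0.2.10", 9000), 10000, 1472, 10)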

4.1. Single Tenant Cloud Computing Scenario. This is the typical configuration where a single tenant runs one or multiple VMs that exchange traffic with one another in the cloud or with an external host, as shown in Figure 4. This is a rather trivial case of limited general interest, but it is useful to assess some basic concepts and pave the way to the deeper analysis developed in the second part of this section. In the experiments reported, as mentioned above, the virtualization Hypervisor was always KVM. A scenario with OpenStack running the cloud environment and a scenario without OpenStack were considered, to allow a general comparison and a first isolation of the performance degradation due to the individual building blocks, in particular Linux Bridge and OVS.

Figure 3: Network elements in an OpenStack compute node running two VMs. Two Linux Bridges (blue boxes) are attached to the VM tap interfaces (green boxes) and connected by virtual Ethernet pairs (light-blue boxes) to br-int.

The experiments report the following cases:

(1) OpenStack scenario: it adopts the standard OpenStack cloud platform described in the previous section, with two VMs respectively acting as sender and receiver. In particular, the following setups were tested:
(1.1) a single compute node executing two colocated VMs;
(1.2) two distinct compute nodes, each executing a VM.
(2) Non-OpenStack scenario: it adopts physical hosts running Linux-Ubuntu server and the KVM Hypervisor, using either OVS or Linux Bridge as a virtual switch. The following setups were tested:
(2.1) one physical host executing two colocated VMs, acting as sender and receiver and directly connected to the same Linux Bridge;
(2.2) the same setup as the previous one, but with an OVS bridge instead of a Linux Bridge;
(2.3) two physical hosts, one executing the sender VM connected to an internal OVS and the other natively acting as the receiver.

Figure 4: Reference logical architecture of a single tenant virtual infrastructure with 5 hosts: 4 hosts are implemented as VMs in the cloud and are interconnected via the OpenStack layer 2 virtual infrastructure; the 5th host is implemented by a physical machine placed outside the cloud but still connected to the same logical LAN.

Figure 5: Multitenant NFV scenario with dedicated network functions tested on the OpenStack platform.

4.2. Multitenant NFV Scenario with Dedicated Network Functions. The multitenant scenario we want to analyze is inspired by a simple NFV case study, as illustrated in Figure 5: each tenant's service chain consists of a customer-controlled VM, followed by a dedicated deep packet inspection (DPI) virtual appliance and a conventional gateway (router) connecting the customer LAN to the public Internet. The DPI is deployed by the service operator as a separate VM with two network interfaces, running a traffic monitoring application based on the nDPI library [28]. It is assumed that the DPI analyzes the traffic profile of the customers (source and destination IP addresses and ports, application protocol, etc.) to guarantee the matching with the customer service level agreement (SLA), a practice that is rather common among Internet service providers to enforce network security and traffic policing. The virtualization approach, executing the DPI in a VM, makes it possible to easily configure and adapt the inspection function to the specific tenant characteristics. For this reason, every tenant has its own DPI with dedicated configuration. On the other hand, the gateway has to implement a standard functionality and is shared among customers. It is implemented as a virtual router for packet forwarding and NAT operations.

The implementation of the test scenarios has been done following the OpenStack architecture. The compute nodes of the cluster run the VMs, while the network node runs the virtual router within a dedicated network namespace. All layer 2 connections are implemented by a virtual switch (with proper VLAN isolation) distributed in both the compute and network nodes. Figure 6 shows the view provided by the OpenStack dashboard in the case of 4 tenants simultaneously active, which is the one considered for the numerical results presented in the following. The choice of 4 tenants was made to provide meaningful results with an acceptable degree of complexity, without lack of generality. As the results show, this is enough to put the hardware resources of the compute node under stress and therefore evaluate performance limits and critical issues.

It is very important to outline that the VM setup shown in Figure 5 is not commonly seen in a traditional cloud computing environment. The VMs usually behave as single hosts connected as endpoints to one or more virtual networks, with one single network interface and no pass-through forwarding duties. In NFV, the virtual network functions (VNFs) often perform actions that require packet forwarding: Network Address Translators (NATs), Deep Packet Inspectors (DPIs), and so forth all belong to this category. If such VNFs are hosted in VMs, the result is that VMs in the OpenStack infrastructure must be allowed to perform packet forwarding, which goes against the typical rules implemented for security reasons in OpenStack. For instance, when a new VM is instantiated, it is attached to a Linux Bridge to which filtering rules are applied, with the goal of preventing the VM from sending packets with MAC and IP addresses that are not the ones allocated to the VM itself. Clearly, this is an antispoofing rule that makes perfect sense in a normal networking environment, but it impairs the forwarding of packets originated by another VM, as is the case in the NFV scenario. In the scenario considered here, it was therefore necessary to permanently modify the filtering rules in the Linux Bridges, by allowing, within each tenant slice, packets coming from or directed to the customer VM's IP address to pass through the Linux Bridges attached to the DPI virtual appliance. Similarly, the virtual router is usually connected just to one LAN, so its NAT function is configured for a single pool of addresses. This was also modified and adapted to serve the whole set of internal networks used in the multitenant setup.
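The following sketch shows the kind of filtering exception this requires: it accepts forwarded frames from the customer subnet on the Linux Bridge port attached to the DPI appliance. The interface name and subnet are placeholders, and the actual Neutron antispoofing rules live in per-port iptables chains whose names are derived from the port UUID, so the real modification is more involved than this single rule:

    import subprocess

    # Insert a netfilter rule matched on the bridge port (physdev) so that
    # forwarded packets sourced by the customer VM are accepted instead of
    # being dropped by the antispoofing filters.
    subprocess.run(["iptables", "-I", "FORWARD",
                    "-m", "physdev", "--physdev-in", "tap-dpi-demo",
                    "-s", "10.0.3.0/24", "-j", "ACCEPT"],
                   check=True)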

4.3. Multitenant NFV Scenario with Shared Network Functions. We finally extend our analysis to a set of multitenant scenarios assuming different levels of shared VNFs, as illustrated in Figure 7. We start with a single VNF, that is, the virtual router connecting all tenants to the external network (Figure 7(a)). Then we progressively add a shared DPI (Figure 7(b)), a shared firewall/NAT function (Figure 7(c)), and a shared traffic shaper (Figure 7(d)). The rationale behind this last group of setups is to evaluate how NFV deployment on top of an OpenStack compute node performs under a realistic multitenant scenario, where traffic flows must be processed by a chain of multiple VNFs.

Figure 6: The OpenStack dashboard shows the tenants' virtual networks (slices). Each slice includes a VM connected to an internal network (InVMnet_i) and a second VM performing DPI and packet forwarding between InVMnet_i and DPInet_i. Connectivity with the public Internet is provided for all by the virtual router in the bottom-left corner.

The complexity of the virtual network path inside the compute node for the VNF chaining of Figure 7(d) is displayed in Figure 8. The peculiar nature of NFV traffic flows is clearly shown in the figure, where packets are being forwarded multiple times across br-int as they enter and exit the multiple VNFs running in the compute node.

5. Numerical Results

5.1. Benchmark Performance. Before presenting and discussing the performance of the study scenarios described above, it is important to set some benchmark as a reference for comparison. This was done by considering a back-to-back (B2B) connection between two physical hosts, with the same hardware configuration used in the cluster of the cloud platform.

The former host acts as traffic generator while the latter acts as traffic sink. The aim is to verify and assess the maximum throughput and sustainable packet rate of the hardware platform used for the experiments. Packet flows ranging from 10^3 to 10^5 packets per second (pps), for both 64- and 1500-byte IP packet sizes, were generated.

For 1500-byte packets, the throughput saturates to about 970 Mbps at 80 Kpps. Given that the measurement does not consider the Ethernet overhead, this limit is clearly very close to 1 Gbps, which is the physical limit of the Ethernet interface. For 64-byte packets, the results are different, since the maximum measured throughput is about 150 Mbps.

Figure 7: Multitenant NFV scenario with shared network functions tested on the OpenStack platform: (a) single VNF; (b) two VNFs' chaining; (c) three VNFs' chaining; (d) four VNFs' chaining.

Therefore, the limiting factor is not the Ethernet bandwidth but the maximum sustainable packet processing rate of the compute node. These results are shown in Figure 9.
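A back-of-the-envelope check explains the 1500-byte saturation point. On the wire, each frame carries, besides the IP packet, 18 bytes of Ethernet header and FCS plus 20 bytes of preamble and interframe gap, none of which is counted by the throughput measurement:

    # Maximum packet rate sustainable by a 1 Gbps Ethernet link.
    LINK_BPS = 1e9
    ETH_OVERHEAD = 18 + 20  # header + FCS, preamble + interframe gap (bytes)

    def line_rate_pps(ip_packet_bytes):
        return LINK_BPS / ((ip_packet_bytes + ETH_OVERHEAD) * 8)

    print(line_rate_pps(1500))  # ~81.3 Kpps: matches the ~80 Kpps saturation
    print(line_rate_pps(64))    # ~1.2 Mpps: far beyond what the host sustains,
                                # consistent with the CPU-bound 150 Mbps measured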

This latter limitation, related to the processing capabilities of the hosts, is not very relevant to the scope of this work. Indeed, it is always possible in a real operational environment to deploy more powerful and better dimensioned hardware. This was not possible in this set of experiments, where the cloud cluster was an existing research infrastructure which could not be modified at will. Nonetheless, the objective here is to understand the limitations that emerge as a consequence of the networking architecture resulting from the deployment of the VNFs in the cloud, and not of the specific hardware configuration. For these reasons, as well as for the sake of brevity, the numerical results presented in the following mostly focus on the case of 1500-byte packet length, which stresses the network more than the hosts in terms of performance.

5.2. Single Tenant Cloud Computing Scenario. The first series of results is related to the single tenant scenario described in Section 4.1. Figure 10 shows the comparison of OpenStack setups (1.1) and (1.2) with the B2B case. The figure shows that the different networking configurations play a crucial role in performance. Setup (1.1), with the two VMs colocated in the same compute node, clearly is more demanding, since the compute node has to process the workload of all the components shown in Figure 3, that is, packet generation and reception in two VMs and layer 2 switching in two Linux Bridges and two OVS bridges (as a matter of fact, the packets are both outgoing and incoming at the same time within the same physical machine). The performance starts deviating from the B2B case at around 20 Kpps, with a saturating effect starting at 30 Kpps. This is the maximum packet processing capability of the compute node, regardless of the physical networking capacity, which is not fully exploited in this particular scenario where the traffic flow does not leave the physical host. Setup (1.2) splits the workload over two physical machines, and the benefit is evident. The performance is almost ideal, with a very little penalty due to the virtualization overhead.

Figure 8: A view of the OpenStack compute node with the tenant VM and the VNFs installed, including the building blocks of the virtual network infrastructure. The red dashed line shows the path followed by the packets traversing the VNF chain displayed in Figure 7(d).

Figure 9: Throughput versus generated packet rate in the B2B setup for 64- and 1500-byte packets. Comparison with the ideal 1500-byte packet throughput.

These very simple experiments lead to an important conclusion that motivates the more complex experiments that follow: the standard OpenStack virtual network implementation can show significant performance limitations. For this reason, the first objective was to investigate where the possible bottleneck is, by evaluating the performance of the virtual network components in isolation. This cannot be done with OpenStack in action; therefore, ad hoc virtual networking scenarios were implemented, deploying just parts of the typical OpenStack infrastructure. These are called Non-OpenStack scenarios in the following.

Figure 10: Received versus generated packet rate in the OpenStack scenario, setups (1.1) and (1.2), with 1500-byte packets.

Figure 11: Received versus generated packet rate in the Non-OpenStack scenario, setups (2.1) and (2.2), with 1500-byte packets.

Setups (2.1) and (2.2) compare Linux Bridge, OVS, and B2B, as shown in Figure 11. The graphs show interesting and important results that can be summarized as follows:

(i) The introduction of some virtual network component (thus introducing the processing load of the physical hosts into the equation) is always a cause of performance degradation, but with very different degrees of magnitude depending on the virtual network component.
(ii) OVS introduces a rather limited performance degradation at very high packet rate, with a loss of some percent.
(iii) Linux Bridge introduces a significant performance degradation, starting well before the OVS case and leading to a loss in throughput as high as 50%.

The conclusion of these experiments is that the presence of additional Linux Bridges in the compute nodes is one of the main reasons for the OpenStack performance degradation. Results obtained from testing setup (2.3) are displayed in Figure 12, confirming that with OVS it is possible to reach performance comparable with the baseline.

5.3. Multitenant NFV Scenario with Dedicated Network Functions. The second series of experiments was performed with reference to the multitenant NFV scenario with dedicated network functions described in Section 4.2. The case study considers that different numbers of tenants are hosted in the same compute node, sending data to a destination outside the LAN, therefore beyond the virtual gateway. Figure 13 shows the packet rate actually received at the destination for each tenant, for different numbers of simultaneously active tenants, with 1500-byte IP packet size. In all cases, the tenants generate the same amount of traffic, resulting in as many overlapping curves as the number of active tenants. All curves grow linearly as long as the generated traffic is sustainable, and then they saturate. The saturation is caused by the physical bandwidth limit imposed by the Gigabit Ethernet interfaces involved in the data transfer. In fact, the curves become flat as soon as the packet rate reaches about 80 Kpps for 1 tenant, about 40 Kpps for 2 tenants, about 27 Kpps for 3 tenants, and about 20 Kpps for 4 tenants, that is, when the total packet rate is slightly more than 80 Kpps, corresponding to 1 Gbps.

Figure 12: Received versus generated packet rate in the Non-OpenStack scenario, setup (2.3), with 1500-byte packets.

Figure 13: Received versus generated packet rate for each tenant (T1, T2, T3, and T4) for different numbers of active tenants, with 1500-byte IP packet size.

In this case it is worth investigating what happens for small packets, which put more pressure on the processing capabilities of the compute node. Figure 14 reports the 64-byte packet size case. As discussed previously, in this case the performance saturation is not caused by the physical bandwidth limit, but by the inability of the hardware platform to cope with the packet processing workload (in fact, the single compute node has to process the workload of all the components involved, including packet generation and DPI in the VMs of each tenant, as well as layer 2 packet processing and switching in three Linux Bridges per tenant and two OVS bridges).

Figure 14: Received versus generated packet rate for each tenant (T1, T2, T3, and T4) for different numbers of active tenants, with 64-byte IP packet size.

As could be easily expected from the results presented in Figure 9, the virtual network is not able to use the whole physical capacity. Even in the case of just one tenant, a total bit rate of about 77 Mbps, well below 1 Gbps, is measured. Moreover, this penalty increases with the number of tenants (i.e., with the complexity of the virtual system). With two tenants, the curves saturate at a total of approximately 150 Kpps (75 × 2); with three tenants, at a total of approximately 135 Kpps (45 × 3); and with four tenants, at a total of approximately 120 Kpps (30 × 4). This is to say that an increase of one unit in the number of tenants results in a decrease of about 10% in the usable overall network capacity and in a similar penalty per tenant.

Given the results of the previous section, it is likely that the Linux Bridges are responsible for most of this performance degradation. In Figure 15, a comparison is presented between the total throughput obtained under normal OpenStack operations and the corresponding total throughput measured in a custom configuration where the Linux Bridges attached to each VM are bypassed. To implement the latter scenario, the OpenStack virtual network configuration running in the compute node was modified by connecting each VM's tap interface directly to the OVS integration bridge, as sketched below. The curves show that the presence of Linux Bridges in normal OpenStack mode is indeed causing performance degradation, especially when the workload is high (i.e., with 4 tenants). It is interesting to note also that the penalty related to the number of tenants is mitigated by the bypass, but not fully solved.
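A sketch of the bypass operation for one VM follows (interface names are hypothetical placeholders patterned after the qbr/qvo naming used by OpenStack; removing the per-VM Linux Bridge also disables the antispoofing rules, so this is a measurement hack rather than a production configuration):

    import subprocess

    def sh(args):
        subprocess.run(args, check=True)

    TAP, QVO = "tap-demo", "qvo-demo"  # placeholder names for one VM port

    sh(["ip", "link", "set", TAP, "nomaster"])    # detach tap from its Linux Bridge
    sh(["ovs-vsctl", "del-port", "br-int", QVO])  # remove the old veth leg on br-int
    sh(["ovs-vsctl", "add-port", "br-int", TAP])  # plug the tap directly into br-int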

5.4. Multitenant NFV Scenario with Shared Network Functions. The third series of experiments was performed with reference to the multitenant NFV scenario with shared network functions described in Section 4.3.

Figure 15: Total throughput measured versus total packet rate generated by 2 to 4 tenants, for 64-byte packet size. Comparison between normal OpenStack mode and Linux Bridge bypass with 3 and 4 tenants.

Figure 16: Received versus generated packet rate for one tenant (T1) when four tenants are active, with 1500-byte IP packet size and different levels of VNF chaining as per Figure 7. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

In each experiment, four tenants equally generate increasing amounts of traffic, ranging from 1 to 100 Kpps. Figures 16 and 17 show the packet rate actually received at the destination from tenant T1, as a function of the packet rate generated by T1, for different levels of VNF chaining, with 1500- and 64-byte IP packet size, respectively.

Figure 17: Received versus generated packet rate for one tenant (T1) when four tenants are active, with 64-byte IP packet size and different levels of VNF chaining as per Figure 7. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

The measurements demonstrate that, for the 1500-byte case, adding a single shared VNF (even one that executes heavy packet processing, such as the DPI) does not significantly impact the forwarding performance of the OpenStack compute node for a packet rate below 50 Kpps (note that the physical capacity is saturated by the flows simultaneously generated from four tenants at around 20 Kpps, similarly to what happens in the dedicated VNF case of Figure 13). Then the throughput slowly degrades. In contrast, when 64-byte packets are generated, even a single VNF can cause heavy performance losses above 25 Kpps, when the packet rate reaches the sustainability limit of the forwarding capacity of our compute node. Independently of the packet size, adding another VNF with heavy packet processing (the firewall/NAT is configured with 40000 matching rules) causes the performance to rapidly degrade. This is confirmed when a fourth VNF is added to the chain, although for the 1500-byte case the measured packet rate is the one that saturates the maximum bandwidth made available by the traffic shaper. Very similar performance, which we do not show here, was measured also for the other three tenants.

To further investigate the effect of VNF chaining, we considered the case when traffic generated by tenant T1 is not subject to VNF chaining (as in Figure 7(a)), whereas flows originated from T2, T3, and T4 are processed by four VNFs (as in Figure 7(d)). The results presented in Figures 18 and 19 demonstrate that, owing to the traffic shaping function applied to the other tenants, the throughput of T1 can reach values not very far from the case when it is the only active tenant, especially for packet rates below 35 Kpps. Therefore, a smart choice of the VNF chaining and a careful planning of the cloud platform resources could improve the performance of a given class of priority customers. In the same situation, we measured the TCP throughput achievable by the four tenants. As shown in Figure 20, we can reach the same conclusions as in the UDP case.

Figure 18: Received throughput versus generated packet rate for each tenant (T1, T2, T3, and T4) when T1 does not traverse the VNF chain of Figure 7(d), with 1500-byte IP packet size. Comparison with the single tenant case. DPI: deep packet inspection, FW: firewall/NAT, TS: traffic shaper, VR: virtual router, DEST: destination.

Figure 19: Received throughput versus generated packet rate for each tenant (T1, T2, T3, and T4) when T1 does not traverse the VNF chain of Figure 7(d), with 64-byte IP packet size. Comparison with the single tenant case. DPI: deep packet inspection, FW: firewall/NAT, TS: traffic shaper, VR: virtual router, DEST: destination.

6. Conclusion

Network Function Virtualization will completely reshape the approach of telco operators to providing existing as well as novel network services, taking advantage of the increased

Figure 20: Received TCP throughput for each tenant (T1, T2, T3, and T4) when T1 does not traverse the VNF chain of Figure 7(d). Comparison with the single tenant case. DPI: deep packet inspection, FW: firewall/NAT, TS: traffic shaper, VR: virtual router, DEST: destination.

flexibility and reduced deployment costs of the cloud computing paradigm. In this work, the problem of evaluating complexity and performance, in terms of sustainable packet rate, of virtual networking in cloud computing infrastructures dedicated to NFV deployment was addressed. An OpenStack-based cloud platform was considered and deeply analyzed to fully understand the architecture of its virtual network infrastructure. To this end, an ad hoc visual tool was also developed that graphically plots the different functional blocks (and related interconnections) put in place by Neutron, the OpenStack networking service. Some examples were provided in the paper.

The analysis brought the focus of the performance investigation onto the two basic software switching elements natively adopted by OpenStack, namely, Linux Bridge and Open vSwitch. Their performance was first analyzed in a single tenant cloud computing scenario, by running experiments on a standard OpenStack setup as well as in ad hoc stand-alone configurations built with the specific purpose of observing them in isolation. The results prove that the Linux Bridge is the critical bottleneck of the architecture, while Open vSwitch shows an almost optimal behavior.

The analysis was then extended to more complex scenarios, assuming a data center hosting multiple tenants deploying NFV environments. The case studies considered first a simple dedicated deep packet inspection function, followed by conventional address translation and routing, and then a more realistic virtual network function chaining shared among a set of customers with increased levels of complexity. Results about sustainable packet rate and throughput performance of the virtual network infrastructure were presented and discussed.

The main outcome of this work is that an open-source cloud computing platform such as OpenStack can be effectively adopted to deploy NFV in network edge data centers replacing legacy telco central offices. However, this solution poses some limitations to the network performance, which are not simply related to the hosting hardware maximum capacity but also to the virtual network architecture implemented by OpenStack. Nevertheless, our study demonstrates that some of these limitations can be mitigated with a careful redesign of the virtual network infrastructure and an optimal planning of the virtual network functions. In any case, such limitations must be carefully taken into account for any engineering activity in the virtual networking arena.

Obviously, scaling up the system and distributing the virtual network functions among several compute nodes will definitely improve the overall performance. However, in this case the role of the physical network infrastructure becomes critical, and an accurate analysis is required in order to isolate the contributions of virtual and physical components. We plan to extend our study in this direction in our future work, after properly upgrading our experimental test-bed.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work was partially funded by EIT ICT Labs, Action Line on Future Networking Solutions, Activity no. 15270/2015 "SDN at the Edges". The authors would like to thank Mr. Giuliano Santandrea for his contributions to the experimental setup.


Research Article
Server Resource Dimensioning and Routing of Service Function Chain in NFV Network Architectures

V. Eramo,1 A. Tosti,2 and E. Miucci1

1DIET, "Sapienza" University of Rome, Via Eudossiana 18, 00184 Rome, Italy
2Telecom Italia, Via di Val Cannuta 250, 00166 Roma, Italy

Correspondence should be addressed to E. Miucci; 1kronos1@gmail.com

Received 29 September 2015; Accepted 18 February 2016

Academic Editor: Vinod Sharma

Copyright © 2016 V. Eramo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The Network Function Virtualization (NFV) technology aims at virtualizing the network service with the execution of the single service components in Virtual Machines activated on Commercial-off-the-shelf (COTS) servers. Any service is represented by a Service Function Chain (SFC), that is, a set of VNFs to be executed according to a given order. The running of VNFs needs the instantiation of VNF instances (VNFIs), which in general are software components executed on Virtual Machines. In this paper we cope with the routing and resource dimensioning problem in NFV architectures. We formulate the optimization problem and, due to its NP-hard complexity, heuristics are proposed for both cases of offline and online traffic demand. We show how the heuristics work correctly by guaranteeing a uniform occupancy of the server processing capacity and the network link bandwidth. A consolidation algorithm for the power consumption minimization is also proposed. The application of the consolidation algorithm allows for a high power consumption saving that, however, is to be paid with an increase in SFC blocking probability.

1. Introduction

Today's networks are overly complex, partly due to an increasing variety of proprietary, fixed-function appliances that are unable to deliver the agility and economics needed to address constantly changing market requirements [1]. This is because network elements have traditionally been optimized for high packet throughput at the expense of flexibility, thus hampering the deployment of new services [2]. Network Function Virtualization (NFV) can provide the infrastructure flexibility and agility needed to successfully compete in today's evolving communications landscape [3]. NFV implements network functions in software running on a pool of shared commodity servers instead of using dedicated proprietary hardware. This virtualized approach decouples the network hardware from the network function and results in increased infrastructure flexibility and reduced hardware costs. Because the infrastructure is simplified and streamlined, new and expanded services can be created quickly and with less expense. Implementation of the paradigm has also been proposed [4] and the performance has been investigated [5]. To support the NFV technology, both ETSI [6, 7] and

IETF [8, 9] are defining novel network architectures able to allocate resources for Virtualized Network Functions (VNFs) as well as manage and orchestrate NFV to support services. In particular, the service is represented by a Service Function Chain (SFC) [8], that is, a set of VNFs that have to be executed according to a given order. Any VNF is run on a VNF instance (VNFI), implemented with one Virtual Machine (VM) whose resources (Vcores, RAM memory, etc.) are allocated to it [10]. Some solutions have been proposed in the literature to solve the problem of choosing the servers where to instantiate the VNFs and to determine the network paths interconnecting the VNFs [1]. A formulation of the optimization problem is illustrated in [11]. Three greedy algorithms and a tabu search-based heuristic are proposed in [12]. The extension of the problem to the case in which virtual routers are also considered is proposed in [13].

The innovative contribution of our paper is that, differently from [12], we follow an approach that allows for both the resource dimensioning and the SFC routing. We formulate the optimization problem whose objective is the minimization of the number of dropped SFC requests, with the constraint that the SFCs are routed so as to respect both



the server processing capacity and network link bandwidth. With the problem being NP-hard, we introduce a heuristic that allows for (i) the dimensioning of the Virtual Machines in terms of number of Vcores assigned and (ii) the SFC routing through the VNF instances implemented by the Virtual Machines. The heuristic performs the dimensioning and routing operations simultaneously. Furthermore, we propose a consolidation algorithm based on Virtual Machine migrations and able to achieve power consumption savings. The proposed algorithms are evaluated in scenarios characterized by offline and online traffic demands.

The paper is organized as follows. The related work is discussed in Section 2. Section 3 is devoted to illustrating the optimization problem and the proposed heuristic for the offline traffic demand case. In Section 4 we describe an SFC planner in which the proposed heuristic is applied in the case of online traffic demand. The planner also implements a consolidation algorithm whose application allows for power consumption savings. Some numerical results are shown in Section 5 to prove the effectiveness of the proposed heuristics. Finally, the main conclusions and future research items are mentioned in Section 6.

2. Related Work

The Internet Engineering Task Force (IETF) has formed the SFC Working Group [8, 9] to define Service Function Chaining related problems and to standardize the architecture and protocols. A Service Function Chain (SFC) is defined as a set of abstract service functions [15] and ordering constraints that must be applied to packets selected as a result of the classification. When virtual service functions are considered, the SFC is referred to as Virtual Network Function Forwarding Graph (VNFFG) within the ETSI [6]. To support the SFCs, Virtual Network Function instances (VNFIs) are activated and executed in COTS servers. To achieve the economics of scale expected from NFV, network link and server resources should be used efficiently. For this reason, efficient algorithms have to be introduced to determine where to instantiate the VNFIs and to route the SFCs, by choosing the network paths and the VNFIs involved. The algorithms have to take into account the limited resources of the network links and the servers and pursue objectives of load balancing, energy saving, recovery from failure, and so on [1]. The task of placing SFCs is closely related to virtual network embedding [16] and virtual data network embedding [17] and may therefore be formulated as an optimization problem with a particular objective. This approach has been followed by [11, 18–21]. For instance, Moens and Turck [11] formulate the SFC placement problem as a linear optimization problem that has the objective of minimizing the number of active servers. Other objectives are pursued in [19] (latency, remaining data rate, number of used network nodes, etc.), and the SFC placing is formulated as a mixed integer quadratically constrained program.

It has been proved that the SFC placing problem is NP-hard. For this reason, efficient heuristics have been proposed and evaluated [10, 13, 22, 23]. For example, Xia et al. [22] formulate the placement and chaining problem as binary integer programming and propose a greedy heuristic in which the SFCs are first sorted according to their resource demands and the SFCs with the highest resource demands are given priority for placement and routing.

In order to achieve energy consumption savings, the NFV architecture should allow for migrations of VNFIs, that is, the migration of the Virtual Machines implementing the VNFIs. Though VNFI migrations allow for energy consumption saving, they may impact the QoS performance received by the users of the migrated VNFs. A model has been proposed in [24] to derive some performance indicators, such as the whole service downtime and the total migration time, so as to make function migration decisions.

The contribution of this paper is twofold: (i) to propose and evaluate the performance of an algorithm that performs resource dimensioning and SFC routing simultaneously and (ii) to investigate the advantages, from the point of view of the power consumption saving, that the application of server consolidation techniques allows us to achieve.

3. Offline Algorithms for SFC Routing in NFV Network Architectures

We consider the case in which SFC requests are known in advance. We formulate the optimization problem whose objective is the minimization of the number of dropped SFC requests, with the constraint that the SFCs are routed so as to respect both the server processing capacity and the network link bandwidth. With the problem being NP-hard, we introduce a heuristic that allows for (i) the dimensioning of the Virtual Machines in terms of the number of Vcores assigned and (ii) the SFC routing through the VNF instances implemented by the Virtual Machines. The heuristic performs the dimensioning and routing operations simultaneously.

The section is organized as follows. The network and traffic model is introduced in Section 3.1. Sections 3.2 and 3.3 are devoted to illustrating the optimization problem and the proposed heuristic, respectively.

3.1. Network and Traffic Model. Next we introduce the main terminology used to represent the physical network, the VNFs, and the SFC traffic requests [25]. We represent the physical network PN as a directed graph $G^{PN} = (V^{PN}, E^{PN})$, where $V^{PN}$ and $E^{PN}$ are the sets of physical nodes and links, respectively. The set $V^{PN}$ of nodes is given by the union of the three node sets $V^{PN}_{A}$, $V^{PN}_{R}$, and $V^{PN}_{S}$, which are the sets of access, switching, and server nodes, respectively. The server nodes and links are characterized by the following:

(i) $N^{PN}_{core}(w)$: processing capacity of the server node $w \in V^{PN}_{S}$, in terms of the number of cores available;

(ii) $C^{PN}(d)$: bandwidth of the physical link $d \in E^{PN}$.

We assume that $F$ types of VNFs can be provisioned, such as firewall, IDS, proxy, load balancer, and so on. We denote by $\mathcal{F} = \{f_1, f_2, \ldots, f_F\}$ the set of VNFs, with $f_i$ being the


Figure 1: An example of graph representing an SFC request for a flow characterized by the bandwidth of 1 Mbps and packet length of 1500 bytes; the chain $u \to v_1 \to v_2 \to t$ traverses the links $e_1$, $e_2$, $e_3$, with $B^{SFC}(v_1) = B^{SFC}(v_2) = 83.33$ and $C^{SFC}(e_1) = C^{SFC}(e_2) = C^{SFC}(e_3) = 1$.

$i$th VNF type. The packet processing time of the VNF $f_i$ is denoted by $t^{proc}_{i}$ ($i = 1, \ldots, F$).

We also assume that the network operator is receiving $T$ Service Function Chain (SFC) requests, known in advance. The $i$th SFC request is characterized by the graph $G^{SFC}_{i} = (V^{SFC}_{i}, E^{SFC}_{i})$, where $V^{SFC}_{i}$ represents the set of access and VNF nodes and $E^{SFC}_{i}$ ($i = 1, \ldots, T$) denotes the links between them. In particular, the set $V^{SFC}_{i}$ is given by the union of $V^{SFC}_{i,A}$ and $V^{SFC}_{i,F}$, denoting the sets of access nodes and VNFs, respectively. The graph is characterized by the following parameters:

(i) $\alpha_{vw}$: assuming the value 1 if the access node $v \in \bigcup_{i=1}^{T} V^{SFC}_{i,A}$ characterizes an SFC request starting/terminating from/to the physical access node $w \in V^{PN}_{A}$; otherwise its value is 0;

(ii) $\beta_{vk}$: assuming the value 1 if the VNF node $v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}$ needs the application of the VNF $f_k$ ($k \in [1, F]$); otherwise its value is 0;

(iii) $B^{SFC}(v)$: the processing capacity requested by the VNF node $v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}$; the parameter value depends on both the packet length and the bandwidth of the packet flow incoming to the VNF node;

(iv) $C^{SFC}(e)$: bandwidth requested by the link $e \in \bigcup_{i=1}^{T} E^{SFC}_{i}$.

An example of SFC request is represented in Figure 1, where the traffic emitted by the ingress access node $u$ has to be handled by the VNF nodes $v_1$ and $v_2$ and is terminated at the egress access node $t$. The access and VNF nodes are interconnected by the links $e_1$, $e_2$, and $e_3$. If the flow bandwidth is 1 Mbps and the packet length is 1500 bytes, we obtain the processing capacities $B^{SFC}(v_1) = B^{SFC}(v_2) = 83.33$, while the bandwidths $C^{SFC}(e_1)$, $C^{SFC}(e_2)$, and $C^{SFC}(e_3)$ of all links equal 1.
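The processing capacity value of the example can be verified with a one-line computation; the sketch below simply re-derives the 83.33 packets/s figure from the stated bandwidth and packet length:

flow_bw_bps = 1e6           # 1 Mbps flow
pkt_size_bits = 1500 * 8    # 1500-byte packets

b_sfc = flow_bw_bps / pkt_size_bits
print(round(b_sfc, 2))      # -> 83.33 packets/s, as in Figure 1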

3.2. Optimization Problem for the SFC Routing and VNF Instance Dimensioning. The objective of the SFC Routing and VNF Instance Dimensioning (SRVID) problem is to maximize the number of accepted SFC requests. The output of the problem is characterized by the following: (i) the servers in which the VNF nodes are executed and (ii) the network paths in which the virtual links of the SFC are routed. This arrangement has to be accomplished without violating either the server processing or the physical link capacities. We assume that all of the servers create a VNF instance for each type of VNF, which will be shared by the SFCs using that server and requesting that type of VNF. For this reason, each server will activate $F$ VNF instances, one for each type, and another output of the problem is the number of Vcores to be assigned to each VNF instance.

We assume that any virtual link of the SFC graphs can be routed through a single physical network path. We introduce the following notations:

(i) $\mathcal{P}$: set of paths in $G^{PN}$;

(ii) $\delta_{dp}$: binary function assuming the value 1 or 0 if the network link $d$ belongs or does not belong to the path $p \in \mathcal{P}$, respectively;

(iii) $a^{PN}(p)$ and $b^{PN}(p)$: origin and destination nodes of the path $p \in \mathcal{P}$;

(iv) $a^{SFC}(d)$ and $b^{SFC}(d)$: origin and destination nodes of the virtual link $d \in \bigcup_{i=1}^{T} E^{SFC}_{i}$.

Next we formulate the optimal SRVID problem, characterized by the following optimization variables:

(i) $x_h$: binary variable assuming the value 1 if the $h$th SFC request is accepted; otherwise its value is zero;

(ii) $y_{wk}$: integer variable characterizing the number of Vcores allocated to the VNF instance of type $k$ in the server $w \in V^{PN}_{S}$;

(iii) $z^{k}_{vw}$: binary variable assuming the value 1 if the VNF node $v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}$ is served by the VNF instance of type $k$ in the server $w \in V^{PN}_{S}$;

(iv) $u_{dp}$: binary variable assuming the value 1 if the virtual link $d \in \bigcup_{i=1}^{T} E^{SFC}_{i}$ is embedded in the physical network path $p \in \mathcal{P}$; otherwise its value is zero.

Next we report the constraints on the optimization variables:

$$\sum_{k=1}^{F} y_{wk} \le N^{PN}_{core}(w), \quad w \in V^{PN}_{S} \quad (1)$$

$$\sum_{k=1}^{F} \sum_{w \in V^{PN}_{S}} z^{k}_{vw} \le 1, \quad v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F} \quad (2)$$

$$z^{k}_{vw} \le \beta_{vk}, \quad k \in [1, F],\ v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F},\ w \in V^{PN}_{S} \quad (3)$$

$$\sum_{v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}} z^{k}_{vw}\, B^{SFC}(v)\, t^{proc}_{k} \le y_{wk}, \quad k \in [1, F],\ w \in V^{PN}_{S} \quad (4)$$

$$u_{dp} \le z^{k}_{a^{SFC}(d)\, a^{PN}(p)}, \quad a^{SFC}(d) \in \bigcup_{i=1}^{T} V^{SFC}_{i,F},\ k \in [1, F],\ p \in \mathcal{P} \quad (5)$$

$$u_{dp} \le z^{k}_{b^{SFC}(d)\, b^{PN}(p)}, \quad b^{SFC}(d) \in \bigcup_{i=1}^{T} V^{SFC}_{i,F},\ k \in [1, F],\ p \in \mathcal{P} \quad (6)$$

$$u_{dp} \le \alpha_{a^{SFC}(d)\, a^{PN}(p)}, \quad a^{SFC}(d) \in \bigcup_{i=1}^{T} V^{SFC}_{i,A},\ p \in \mathcal{P} \quad (7)$$

$$u_{dp} \le \alpha_{b^{SFC}(d)\, b^{PN}(p)}, \quad b^{SFC}(d) \in \bigcup_{i=1}^{T} V^{SFC}_{i,A},\ p \in \mathcal{P} \quad (8)$$

$$\sum_{p \in \mathcal{P}} u_{dp} \le 1, \quad d \in \bigcup_{i=1}^{T} E^{SFC}_{i} \quad (9)$$

$$\sum_{d \in \bigcup_{i=1}^{T} E^{SFC}_{i}} C^{SFC}(d) \sum_{p \in \mathcal{P}} \delta_{ep}\, u_{dp} \le C^{PN}(e), \quad e \in E^{PN} \quad (10)$$

$$x_{h} \le \sum_{k=1}^{F} \sum_{w \in V^{PN}_{S}} z^{k}_{vw}, \quad h \in [1, T],\ v \in V^{SFC}_{h,F} \quad (11)$$

$$x_{h} \le \sum_{p \in \mathcal{P}} u_{dp}, \quad h \in [1, T],\ d \in E^{SFC}_{h} \quad (12)$$

Constraint (1) establishes the fact that at most a number of Vcores equal to the available ones can be used in each server node $w \in V^{PN}_{S}$. Constraint (2) expresses the condition that a VNF node can be served by one VNF instance only. To guarantee that a node $v \in V^{SFC}_{h,F}$ needing the application of a VNF type is mapped to a correct VNF instance, we introduce constraint (3). Constraint (4) limits the number of VNF nodes assigned to one VNF instance by taking into account both the number of Vcores assigned to the VNF instance and the required VNF node processing capacities. Constraints (5)–(8) establish the fact that, when the virtual link $d$ is supported by the physical network path $p$, then $a^{PN}(p)$ and $b^{PN}(p)$ must be the physical network nodes that the nodes $a^{SFC}(d)$ and $b^{SFC}(d)$ of the virtual graph are assigned to. The choice of mapping of any virtual link onto a single physical network path is represented by constraint (9). Constraint (10) avoids any overloading of any physical network link. Finally, constraints (11) and (12) establish the fact that an SFC request can be accepted only when the nodes and the links of the SFC graph are assigned to VNF instances and physical network paths.

The problem objective is to maximize the number of SFCs accepted, given by

$$\max \sum_{h=1}^{T} x_{h} \quad (13)$$

The introduced optimization problem is intractable because it requires solving an NP-hard bin packing problem [26]. Due to its high complexity, it is not possible to solve the problem directly in a timely manner, given the large number of servers and network nodes. For this reason, we propose an efficient heuristic in Section 3.3.
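To make the formulation concrete, the following is a minimal sketch of the server-side part of the SRVID problem expressed with the PuLP modeling library: it implements only constraints (1)-(4), the acceptance constraint (11), and objective (13) on a toy instance with one VNF node per request. All set sizes, capacities, and demand values are illustrative assumptions, and the routing constraints (5)-(10) and (12) are omitted; this is not the authors' implementation.

import pulp

servers = ["w1", "w2"]                   # V^PN_S (toy instance)
vnf_types = [1, 2]                       # F = 2 VNF types
n_core = {"w1": 4, "w2": 4}              # N^PN_core(w)
t_proc = {1: 7.08e-6, 2: 1.64e-6}        # packet processing times (s)

requests = [0, 1, 2]                     # T = 3 SFC requests
nodes = [(h, 0) for h in requests]       # one VNF node v per request
beta = {v: {1: 1, 2: 0} for v in nodes}  # every node asks for VNF type 1
b_sfc = {v: 83333.0 for v in nodes}      # B^SFC(v) in packets/s (1 Gbps flow)

prob = pulp.LpProblem("SRVID_server_side", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", requests, cat="Binary")
y = pulp.LpVariable.dicts("y", [(w, k) for w in servers for k in vnf_types],
                          lowBound=0, cat="Integer")
z = pulp.LpVariable.dicts("z", [(v, k, w) for v in nodes
                                for k in vnf_types for w in servers],
                          cat="Binary")

prob += pulp.lpSum(x[h] for h in requests)                       # objective (13)
for w in servers:                                                # constraint (1)
    prob += pulp.lpSum(y[(w, k)] for k in vnf_types) <= n_core[w]
for v in nodes:                                                  # constraint (2)
    prob += pulp.lpSum(z[(v, k, w)] for k in vnf_types for w in servers) <= 1
for v in nodes:                                                  # constraint (3)
    for k in vnf_types:
        for w in servers:
            prob += z[(v, k, w)] <= beta[v][k]
for k in vnf_types:                                              # constraint (4)
    for w in servers:
        prob += pulp.lpSum(z[(v, k, w)] * b_sfc[v] * t_proc[k]
                           for v in nodes) <= y[(w, k)]
for h in requests:                                               # constraint (11)
    prob += x[h] <= pulp.lpSum(z[((h, 0), k, w)]
                               for k in vnf_types for w in servers)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("accepted SFCs:", sum(int(x[h].value()) for h in requests))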

3.3. Maximizing the Accepted SFC Requests Number (MASRN) Heuristic. The Maximizing the Accepted SFC Requests Number (MASRN) heuristic is based on the embedding algorithm proposed in [14] and has the objective of maximizing the number of accepted SFC requests. The MASRN algorithm tries routing the SFCs one by one, using the least loaded link and server resources so as to balance their use. Algorithm 1 illustrates the operation mode of the MASRN algorithm; the routing of a target SFC, the $k_{tar}$th, is illustrated. The main inputs of the algorithm are: (i) the set $T_{pre}$ containing the indexes of the SFC requests routed successfully before the $k_{tar}$th SFC request; (ii) the $k_{tar}$th SFC request graph $G^{SFC}_{k_{tar}} = (V^{SFC}_{k_{tar}}, E^{SFC}_{k_{tar}})$; (iii) the values of the parameters $\alpha_{vw}$ for the $k_{tar}$th SFC request; (iv) the values of the variables $z^{k}_{vw}$ and $u_{dp}$ for the SFC requests successfully routed before the $k_{tar}$th SFC request, as defined in Section 3.2.

The operation mode of the heuristic is based on the following variables:

(i) $A^{k}(w)$: the load of the cores allocated for the VNF instance of type $k \in [1, F]$ in the server $w \in V^{PN}_{S}$; the value of $A^{k}(w)$ is initialized by taking into account the core resource amount used by the SFCs successfully routed before the $k_{tar}$th SFC request; hence its initial value is

$$A^{k}(w) = \sum_{v \in \bigcup_{i \in T_{pre}} V^{SFC}_{i,F}} z^{k}_{vw}\, B^{SFC}(v)\, t^{proc}_{k} \quad (14)$$

(ii) $S_{N}(w)$: defined as the stress of the node $w \in V^{PN}_{S}$ [14] and characterizing the server load; its value is initialized to

$$S_{N}(w) = \sum_{k=1}^{F} A^{k}(w) \quad (15)$$

(iii) $S_{L}(e)$: defined as the stress of the physical network link $e \in E^{PN}$ [14] and characterizing the link load; its value is initialized to

$$S_{L}(e) = \sum_{d \in \bigcup_{i \in T_{pre}} E^{SFC}_{i}} C^{SFC}(d) \sum_{p \in \mathcal{P}} \delta_{ep}\, u_{dp} \quad (16)$$

(1) Input: physical network graph $G^{PN} = (V^{PN}, E^{PN})$; $k_{tar}$th SFC request and its graph $G^{SFC}_{k_{tar}} = (V^{SFC}_{k_{tar}}, E^{SFC}_{k_{tar}})$; $T_{pre}$; $\alpha_{vw}$, $v \in V^{SFC}_{k_{tar},A}$, $w \in V^{PN}_{A}$; $z^{k}_{vw}$, $k \in [1,F]$, $v \in \bigcup_{i \in T_{pre}} V^{SFC}_{i,F}$, $w \in V^{PN}_{S}$; $u_{dp}$, $d \in \bigcup_{i \in T_{pre}} E^{SFC}_{i}$, $p \in \mathcal{P}$
(2) Variables: $A^{k}(w)$, $k \in [1,F]$, $w \in V^{PN}_{S}$; $S_{N}(w)$, $w \in V^{PN}_{S}$; $S_{L}(e)$, $e \in E^{PN}$; $y_{wk}$, $k \in [1,F]$, $w \in V^{PN}_{S}$
/* Evaluation of the candidate mapping of $G^{SFC}_{k_{tar}}$ in $G^{PN}$ */
(3) assign each access node $v \in V^{SFC}_{k_{tar},A}$ to the physical network node $w \in V^{PN}_{A}$ according to the value of the parameter $\alpha_{vw}$
(4) select the server nodes $w \in V^{PN}_{S}$ and the physical network paths $p \in \mathcal{P}$ to be assigned to the VNF nodes $v \in V^{SFC}_{k_{tar},F}$ and virtual links $d \in E^{SFC}_{k_{tar}}$ by applying the algorithm proposed in [14], based on the values of the node and link stresses $S_{N}(w)$ and $S_{L}(e)$; determine the variable values $z^{k}_{vw}$, $k \in [1,F]$, $v \in V^{SFC}_{k_{tar},F}$, $w \in V^{PN}_{S}$, and $u_{dp}$, $d \in E^{SFC}_{k_{tar}}$, $p \in \mathcal{P}$
/* Server Resource Availability Check Phase */
(5) for $v \in V^{SFC}_{k_{tar},F}$, $w \in V^{PN}_{S}$, $k \in [1,F]$ do
(6)   if $\sum_{s=1, s \neq k}^{F} y_{ws} + \lceil A^{k}(w) + z^{k}_{vw} B^{SFC}(v)\, t^{proc}_{k} \rceil \le N^{PN}_{core}(w)$ then
(7)     $A^{k}(w) = A^{k}(w) + z^{k}_{vw} B^{SFC}(v)\, t^{proc}_{k}$
(8)     $S_{N}(w) = S_{N}(w) + z^{k}_{vw} B^{SFC}(v)\, t^{proc}_{k}$
(9)     $y_{wk} = \lceil A^{k}(w) \rceil$
(10)  else
(11)    REJECT the $k_{tar}$th SFC request
(12)  end if
(13) end for
/* Link Resource Availability Check Phase */
(14) for $e \in E^{PN}$ do
(15)  if $S_{L}(e) + \sum_{d \in E^{SFC}_{k_{tar}}} C^{SFC}(d) \sum_{p \in \mathcal{P}} \delta_{ep}\, u_{dp} \le C^{PN}(e)$ then
(16)    $S_{L}(e) = S_{L}(e) + \sum_{d \in E^{SFC}_{k_{tar}}} C^{SFC}(d) \sum_{p \in \mathcal{P}} \delta_{ep}\, u_{dp}$
(17)  else
(18)    REJECT the $k_{tar}$th SFC request
(19)  end if
(20) end for
(21) Output: $z^{k}_{vw}$, $k \in [1,F]$, $v \in V^{SFC}_{k_{tar},F}$, $w \in V^{PN}_{S}$; $u_{dp}$, $d \in E^{SFC}_{k_{tar}}$, $p \in \mathcal{P}$

Algorithm 1: MASRN algorithm.

The candidate physical network links and nodes in which to embed the target SFC are evaluated first. The access nodes are evaluated (line 3) according to the values of the parameters $\alpha_{vw}$, $v \in V^{SFC}_{k_{tar},A}$, $w \in V^{PN}_{A}$. Next, the MASRN algorithm selects a cluster of server nodes (line 4) that are not lightly loaded but are also likely to result in low substrate link stresses when they are connected. Details on the procedure are reported in [14].

In the next phases, the resource availability in the selected cluster is checked. The resource availability in the server nodes and physical network links is verified in the Server (lines 5-13) and Link (lines 14-20) Resource Availability Check Phases, respectively. If resources are not available in either links or nodes, the target SFC request is rejected. In particular, in the Server Resource Availability Check Phase, the algorithm verifies whether the allocation of new Vcores is needed and, in this case, their availability is checked (line 6). The variable $y_{wk}$ is also updated in this phase (line 9).

Finally, the outputs (line 21) of the MASRN algorithm are the selected values for the variables $z^{k}_{vw}$, $k \in [1, F]$, $v \in V^{SFC}_{k_{tar},F}$, $w \in V^{PN}_{S}$, and $u_{dp}$, $d \in E^{SFC}_{k_{tar}}$, $p \in \mathcal{P}$.

The computational complexity of the MASRN algorithm depends on the procedure for the evaluation of the potential nodes [14]. Let $N_s$ be the total number of servers. The complexity of the phase in which the potential nodes are evaluated can be assessed according to the following remarks: (i) as many servers as the number of the VNFs of the SFC are determined, and (ii) the $h$th server is selected by evaluating the $(N_s - h)$ shortest paths and by performing a minimum operation on a list with $(N_s - h)$ elements. According to these remarks, the computational complexity is given by $O(V(V + N_s) N_l \log(N_s + N_n))$, where $V$ is the maximum number of VNFs in an SFC and $N_l$ and $N_n$ are the numbers of nodes and links of the network.

Figure 2: SFC planner architecture.

4. Online Algorithms for SFC Routing in NFV Network Architectures

In the online approach, the SFC requests arrive at the system over time and each of them is characterized by a certain duration. In order to reduce the complexity of the online SFC resource allocation algorithm, we designed an SFC planner whose general architecture is described in Section 4.1. The operation mode of the SFC planner is based on some algorithms that we describe in Section 4.2.

4.1. SFC Planner Architecture. The architecture of the SFC planner is shown in Figure 2. It could make up the main component of an orchestrator in the NFV architecture [6]. It is composed of three components: the SFC Scheduler, the Resource Monitor, and the Consolidation Module.

Once the SFC Scheduler receives an SFC request, it executes an algorithm to verify whether resources are available and to decide which resources to use.

The physical resources, as well as the Virtual Machines running on the servers, are monitored by the Resource Monitor. If a failure of any physical/virtual node or link occurs, it notifies the event to the SFC Scheduler.

The Consolidation Module allows for server resource consolidation in low traffic periods. When the consolidation technique is applied, VMs are migrated to as few servers as possible, the remaining ones are turned off, and power consumption savings can be achieved.

4.2. SFC Scheduling and Consolidation Algorithms. In this section we describe the algorithms executed by the SFC Scheduler and the Consolidation Module.

Whenever a new SFC request arrives at the SFC planner, the SFC scheduling algorithm is executed in order to find the best embedding in the system while maintaining a uniform usage of the network resources. The main steps of this procedure are reported in the flow chart in Figure 3 and are based on the MASRN heuristic described in Section 3.3. The algorithm starts by selecting the set of servers on which the VNFs requested by the SFC will be executed. In the second step, the physical paths on which the virtual links will be mapped are selected. If there are not enough available

Figure 3: Flow chart of the SFC scheduling algorithm.

resources in the first or in the second step, the request is dropped.
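A compact driver showing how the SFC Scheduler could chain the two phases of the flow chart, reusing the server_check and link_check helpers sketched in Section 3.3 above; the state argument is a hypothetical container bundling the stress and capacity data structures, and rollback of partially committed server updates is omitted for brevity.

def schedule_sfc(mapping, routing, state):
    # Step 1: server selection and Vcore availability
    if not server_check(mapping, state.A, state.S_N, state.y,
                        state.n_core, state.b_sfc, state.t_proc, state.F):
        return "dropped: not enough server resources"
    # Step 2: path selection and link bandwidth availability
    if not link_check(routing, state.S_L, state.c_pn):
        return "dropped: not enough link resources"
    return "accepted"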

The flow chart of the consolidation algorithm is reported in Figure 4. Each server $s \in V^{PN}_{S}$ is characterized by a parameter $\eta_s$, defined as the ratio of the total amount of incoming/outgoing traffic $\Lambda_s$ that it is handling to the power $P_s$ it is consuming, that is, $\eta_s = \Lambda_s / P_s$. The power consumption model assumes a constant power contribution and a variable one that linearly increases with the handled traffic. The server power consumption is expressed by

$$P_s = P_{idle} + (P_{max} - P_{idle}) \frac{L_s}{C_s}, \quad \forall s \in V^{PN}_{S} \quad (17)$$

where $P_{idle}$ is the power consumed by the server when no traffic is handled, $P_{max}$ is the maximum server power consumption, $L_s$ is the total load handled by the server $s$, and $C_s$ is its maximum capacity. The maximum power that the server can consume is expressed as $P_{max} = P_{idle}/a$, where $a$ is a parameter characterizing how much the power consumption depends on the handled traffic: the lower $a$ is, the more rate adaptive the power consumption. For $a = 1$ the rate adaptive power consumption is absent and the consumed power is constant.
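Expression (17) and the efficiency parameter $\eta_s$ translate directly into code. In the sketch below, the default values $P_{max} = 1000$ W and $C_s = 10$ Gbps are those used later in Section 5, while the value $a = 0.1$ chosen for the example is only illustrative:

def server_power(load_bps, c_s=10e9, p_max=1000.0, a=0.1):
    # expression (17): P_s = P_idle + (P_max - P_idle) * L_s / C_s,
    # with P_idle = a * P_max
    p_idle = a * p_max
    return p_idle + (p_max - p_idle) * load_bps / c_s

def efficiency(traffic_bps, power_w):
    # eta_s = Lambda_s / P_s: bits per second carried per watt consumed
    return traffic_bps / power_w

print(server_power(5e9, a=0.1))   # -> 550.0 W at half load (rate adaptive)
print(server_power(5e9, a=1.0))   # -> 1000.0 W regardless of load (constant)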

Figure 4: Flow chart of the SFC consolidation algorithm.

The proposed algorithm tries switching off the server with the minimum power consumption per bit carried. For this reason, when the consolidation algorithm is executed, it starts selecting the server $s_{min}$ with the minimum value of $\eta_s$ as

the one to turn off. A set of possible destination servers able to host the VMs executed on $s_{min}$ is then evaluated and sorted in descending order, based again on the value of $\eta_s$. The algorithm proceeds by selecting the first element of the ordered set and evaluating whether there are enough resources in the site to reroute all the flows generated from or terminated at $s_{min}$. If it succeeds, the migration is performed and the server $s_{min}$ is turned off. If it fails, the destination server is removed from the set and the next server of the list is considered. When the ordered set of possible destinations becomes empty, another server to turn off is selected.

The SFC Consolidation Module also manages the resource deconsolidation procedure when the reactivation of physical servers/links is needed. Basically, the deconsolidation algorithm selects the most loaded VMs already migrated to a new server and moves them back to their original servers. The flows handled by these VMs are accordingly rerouted.
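A simplified sketch of the consolidation loop of Figure 4 follows. The can_host and migrate helpers are hypothetical stand-ins for the resource/rerouting check and the actual VM moves, and the update of the efficiencies after each migration is neglected for brevity:

def consolidate(active, eta, can_host, migrate):
    # active: set of powered-on servers; eta: server -> Lambda_s / P_s
    for s_min in sorted(active, key=lambda s: eta[s]):   # least efficient first
        candidates = sorted((s for s in active if s != s_min),
                            key=lambda s: eta[s], reverse=True)
        for dest in candidates:
            if can_host(dest, s_min):    # enough server and link resources?
                migrate(s_min, dest)     # reroute all flows from/to s_min
                active.discard(s_min)    # s_min can now be switched off
                break
    return active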

5. Numerical Results

We evaluate the effectiveness of the proposed algorithms in the case of the network scenario reported in Figure 5. The network is composed of five core nodes, five edge nodes, and six access nodes, in which the SFC requests are randomly generated and terminated. A Network Function Virtualization (NFV) site is connected to edge nodes 3 and 4 through two access routers 1 and 2. The NFV site is also composed of two switches, each connected to eight servers. In the basic configuration, 40 Gbps links are considered, except the links connecting the servers to the switches in the NFV site, whose rate is equal to 10 Gbps. The sixteen servers are equipped with 48 Vcores each.

Each SFC request is composed of three Virtual Network Functions (VNFs), that is, a firewall, a load balancer, and a VPN encryption, whose packet processing times are assumed to be equal to 7.08 μs, 0.65 μs, and 1.64 μs [27], respectively. We also assume that the load balancer splits the input traffic towards a number of output links chosen uniformly from 1 to 3.

An example of graph for a generated SFC is reported in Figure 6, in which $u_t$ is an ingress access node and $v^{1}_{t}$, $v^{2}_{t}$, and $v^{3}_{t}$ are egress access nodes. The first and second VNFs are a firewall and a VPN encryption; the third VNF is a load balancer splitting the traffic towards three output links. In the considered case study, we also assume that the SFC handles traffic flows of packet length equal to 1500 bytes and required bandwidth uniformly distributed from 500 Mbps to 1 Gbps.

5.1. Offline SFC Scheduling. In the offline case, we assume that a given number $T$ of SFC requests are known a priori. For the case study previously described, we apply the MASRN heuristic proposed in Section 3.3. We report the percentage of dropped SFCs in Figure 7 as a function of the offered number of SFCs. We report the curve for the basic scenario and the ones in which the link capacities are doubled, quadrupled, and increased by 10 times, respectively. From Figure 7 we can see how, in order to have a dropped SFC percentage lower than 10%, the number of SFC requests has to be smaller than 60 in the case of the basic scenario. This remarkable blocking is due to the shortage of network capacity. In fact, we can notice from Figure 7 how the dropped SFC percentage significantly decreases when the link bandwidth increases. For instance, when the capacity is quadrupled, the network infrastructure is able to accommodate up to 180 SFCs without any loss.

This is confirmed in Figure 8, where we report the number of Vcores occupied in the servers in the case of $T = 360$. Each of the servers is represented on the $x$-axis with the identification (ID) of Figure 5. We also show in Figure 8 the number of Vcores allocated to each firewall, load balancer, and VPN encryption VNF with the blue, green, and red colors, respectively. The basic scenario case and the ones in which the link capacity is doubled, quadrupled, and increased by 10 times are reported in Figures 8(a), 8(b), 8(c), and 8(d), respectively. From Figure 8 we can notice how the MASRN heuristic works correctly by guaranteeing a uniform occupancy of the servers in terms of the total number of Vcores used. We can also notice how the firewall VNF is the one needing the highest number of Vcores allocated. That is a consequence of the higher processing time that the firewall VNF requires with respect to the load balancer and VPN encryption VNFs.
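The dominance of the firewall in the Vcore allocation follows directly from constraint (4): each VNF instance must absorb $B^{SFC}(v) \cdot t^{proc}_{k}$ core-equivalents per served node. A quick computation for a 1 Gbps flow of 1500-byte packets (the upper end of the bandwidth range of the case study) makes this concrete:

pkt_rate = 1e9 / (1500 * 8)   # ~83,333 packets/s for a 1 Gbps flow
for name, t in [("firewall", 7.08e-6),
                ("VPN encryption", 1.64e-6),
                ("load balancer", 0.65e-6)]:
    print(name, round(pkt_rate * t, 2), "cores per SFC")
# -> firewall 0.59, VPN encryption 0.14, load balancer 0.05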

5.2. Online SFC Scheduling. The numerical results are achieved in the following traffic scenario. The traffic pattern we consider is supposed to be sinusoidal, with a peak value during daytime and a minimum value during nighttime. We also assume that SFC requests follow a Poisson process and the SFC duration time is distributed according to an

Figure 5: The network topology is composed of six access nodes, five edge nodes, and five core nodes. The NFV site is composed of two routers, two switches, and sixteen servers.

exponential distribution. The following parameters define the SFC requests in the Peak Hour Interval (PHI):

(i) $\lambda$: the arrival rate of SFC requests;

(ii) $\mu$: the termination rate of an embedded SFC;

(iii) $[\beta^{min}, \beta^{max}]$: the range in which the bandwidth request for an SFC is randomly selected.

We consider $K$ time intervals in a day, and each of them is characterized by a scaling factor $\alpha_h$, with $h = 0, \ldots, K-1$. In order to capture the sinusoidal shape of the traffic in a day, $\alpha_h$ is evaluated with the following expression:

$$\alpha_h = \begin{cases} 1 & h = 0 \\ 1 - \dfrac{2h}{K}\,(1 - \alpha_{min}) & h = 1, \ldots, \dfrac{K}{2} \\ 1 - 2\,\dfrac{K-h}{K}\,(1 - \alpha_{min}) & h = \dfrac{K}{2}+1, \ldots, K-1 \end{cases} \quad (18)$$

where $\alpha_0$ and $\alpha_{min}$ refer to the peak traffic and the minimum traffic scenario, respectively.

The parameter $\alpha_h$ affects the SFC arrival rate and the bandwidth requested. In particular, we have the following


Figure 6: An example of SFC composed of one firewall VNF, one VPN encryption VNF, and one load balancer VNF that splits the input traffic towards three output logical links; $u_t$ denotes the ingress access node, while $v^{1}_{t}$, $v^{2}_{t}$, and $v^{3}_{t}$ denote the egress access nodes.

Figure 7: Dropped SFC percentage as a function of the number of SFCs offered. The four curves are reported for the basic scenario case and the ones in which the link capacities are doubled, quadrupled, and increased by 10 times.

expressions for the SFC request rate $\lambda_h$ and the minimum and maximum requested bandwidths $\beta^{min}_{h}$ and $\beta^{max}_{h}$, respectively, in the interval $h$ ($h = 0, \ldots, K-1$):

$$\lambda_h = \alpha_h \lambda, \qquad \beta^{min}_{h} = \alpha_h \beta^{min}, \qquad \beta^{max}_{h} = \alpha_h \beta^{max} \quad (19)$$
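Expressions (18) and (19) can be transcribed as follows; $K = 24$ matches the case study below, while the value of alpha_min is an assumption chosen only for illustration:

def alpha(h, K=24, alpha_min=0.2):
    # expression (18): triangular day profile with alpha_0 = 1
    if h == 0:
        return 1.0
    if h <= K // 2:
        return 1 - (2 * h / K) * (1 - alpha_min)
    return 1 - 2 * ((K - h) / K) * (1 - alpha_min)

def interval_params(h, lam=1.0, beta_min=500e6, beta_max=1e9, K=24):
    # expression (19): scaled arrival rate and bandwidth range in interval h
    a = alpha(h, K)
    return a * lam, a * beta_min, a * beta_max

print([round(alpha(h), 2) for h in range(24)])  # sinusoidal-like daily profile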

Next, we evaluate the performance of the SFC planner proposed in Section 4 in the dynamic traffic scenario and for the parameter values $\beta^{min} = 500$ Mbps, $\beta^{max} = 1$ Gbps, $\lambda = 1$ req/min, and $\mu = 1/15$ min$^{-1}$. We also assume $K = 24$, with traffic change, and consequently application of the consolidation algorithm, every hour.

Finally, we consider servers whose power consumption is characterized by expression (17), with parameters $P_{max} = 1000$ W and $C_s = 10$ Gbps.

We report the SFC blocking probability as a function of the average number of offered SFC requests during the PHI ($\lambda/\mu$) in Figure 9. We show the results in the case of the basic link capacity scenario and in the ones obtained considering a capacity of the links that is twice and four times higher than the basic one. We consider the case of servers with no rate adaptive power consumption ($a = 1$). As we can observe, when the offered traffic is low, the degradation in terms of SFC blocking probability introduced by the consolidation technique increases. In fact, in this low traffic scenario, more resource consolidation is possible, with the consequence of higher SFC blocking probabilities. When the offered traffic increases, the blocking probability with or without applying the consolidation technique does not change much, because the network is heavily loaded and only few servers can be turned off. Furthermore, as we can expect, the blocking probability decreases when the link capacity increases. The performance loss due to the application of the consolidation technique is what we have to pay in order to obtain benefits in terms of power saved by turning off servers. The curves in Figure 10 compare the power consumption percentage savings that we can achieve in the cases $a = 0.1$ and $a = 1$. The results show that, when the offered traffic decreases and the link capacity increases, higher power consumption savings can be achieved. The reason is that we have more resources on the links and less loaded VMs, which allow for a higher number of VM migrations and more resource consolidation. The other result to notice is that the use of rate adaptive servers reduces the power consumption saving when we perform resource consolidation. This is due to the lower value $P_{idle}$ of the constant power consumption of the server, which leads to a lower power consumption saving when a server is switched off.

6. Conclusions

The aim of this paper has been to propose heuristics for the resource dimensioning and the routing of Service Function Chains in network architectures employing the Network Function Virtualization technology. We have introduced an optimization problem that has the objective of minimizing the number of dropped SFC requests while complying with the server processing and link bandwidth capacities. With the problem being NP-hard, we have proposed the Maximizing the Accepted SFC Requests Number (MASRN) heuristic, which is based on the uniform occupancy of the server and link resources. The heuristic is also able to dimension the Vcores of the servers by evaluating the number of Vcores to be allocated to each VNF instance.

An SFC planner has also been proposed for the dynamic traffic scenario. The planner is based on a consolidation algorithm that allows for the switching off of servers during the low traffic periods. A case study has been analyzed in which we have shown that the heuristic works correctly by uniformly allocating the server resources and reserving a higher number of Vcores for the VNFs characterized by higher packet processing times. Furthermore, the proposed SFC planner allows for a power consumption saving dependent on the offered traffic and varying from 10% to 70%. Such a saving is to be paid with an increase in SFC blocking probability. As future research, we will propose and investigate Virtual Machine migration policies


Figure 8: Number of allocated Vcores in each server for $T = 360$; panels (a)-(d) refer to the basic scenario and to the cases in which the link capacity is doubled, quadrupled, and increased by 10 times. The numbers of Vcores for the firewall, load balancer, and VPN encryption VNFs are represented with blue, green, and red colors.

Figure 9: SFC blocking probability as a function of the average number of SFCs offered in the PHI, for the cases in which the consolidation technique is applied and it is not. The server power consumption is constant ($a = 1$).


Figure 10: The power consumption percentage saving achieved with the application of the consolidation technique versus the average number of SFCs offered in the PHI. The cases $a = 1$ and $a = 0.1$ are considered.

allowing for the achievement of the right trade-off between power consumption saving and SFC blocking probability degradation.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

The work described in this paper has been funded by Telecom Italia within the research agreement titled "Definition and Evaluation of Protocols and Architectures of a Network Automation Platform for TI-NVF Infrastructure".

References

[1] R. Mijumbi, J. Serrat, J. Gorricho, N. Bouten, F. De Turck, and R. Boutaba, "Network function virtualization: state-of-the-art and research challenges," IEEE Communications Surveys & Tutorials, vol. 18, no. 1, pp. 236-262, 2016.

[2] P. Veitch, M. J. McGrath, and V. Bayon, "An instrumentation and analytics framework for optimal and robust NFV deployment," IEEE Communications Magazine, vol. 53, no. 2, pp. 126-133, 2015.

[3] R. Yu, G. Xue, V. T. Kilari, and X. Zhang, "Network function virtualization in the multi-tenant cloud," IEEE Network, vol. 29, no. 3, pp. 42-47, 2015.

[4] Z. Bronstein, E. Roch, J. Xia, and A. Molkho, "Uniform handling and abstraction of NFV hardware accelerators," IEEE Network, vol. 29, no. 3, pp. 22-29, 2015.

[5] B. Han, V. Gopalakrishnan, L. Ji, and S. Lee, "Network function virtualization: challenges and opportunities for innovations," IEEE Communications Magazine, vol. 53, no. 2, pp. 90-97, 2015.

[6] ETSI Industry Specification Group (ISG) NFV, "ETSI Group Specifications on Network Function Virtualization, 1st Phase Documents," January 2015, http://docbox.etsi.org/ISG/NFV/Open.

[7] H. Hawilo, A. Shami, M. Mirahmadi, and R. Asal, "NFV: state of the art, challenges and implementation in next generation mobile networks (vEPC)," IEEE Network, vol. 28, no. 6, pp. 18-26, 2014.

[8] The Internet Engineering Task Force (IETF), Service Function Chaining (SFC) Working Group (WG), 2015, https://datatracker.ietf.org/wg/sfc/charter.

[9] The Internet Engineering Task Force (IETF), Service Function Chaining (SFC) Working Group (WG) Documents, 2015, https://datatracker.ietf.org/wg/sfc/documents.

[10] M. Yoshida, W. Shen, T. Kawabata, K. Minato, and W. Imajuku, "MORSA: a multi-objective resource scheduling algorithm for NFV infrastructure," in Proceedings of the 16th Asia-Pacific Network Operations and Management Symposium (APNOMS '14), pp. 1-6, Hsinchu, Taiwan, September 2014.

[11] H. Moens and F. D. Turck, "VNF-P: a model for efficient placement of virtualized network functions," in Proceedings of the 10th International Conference on Network and Service Management (CNSM '14), pp. 418-423, November 2014.

[12] R. Mijumbi, J. Serrat, J. L. Gorricho, N. Bouten, F. De Turck, and S. Davy, "Design and evaluation of algorithms for mapping and scheduling of virtual network functions," in Proceedings of the 1st IEEE Conference on Network Softwarization (NetSoft '15), pp. 1-9, University College London, London, UK, April 2015.

[13] S. Clayman, E. Maini, A. Galis, A. Manzalini, and N. Mazzocca, "The dynamic placement of virtual network functions," in Proceedings of the IEEE Network Operations and Management Symposium (NOMS '14), pp. 1-9, Krakow, Poland, May 2014.

[14] Y. Zhu and M. H. Ammar, "Algorithms for assigning substrate network resources to virtual network components," in Proceedings of the 25th IEEE International Conference on Computer Communications (INFOCOM '06), pp. 1-12, IEEE, Barcelona, Spain, April 2006.

[15] G. Lee, M. Kim, S. Choo, S. Pack, and Y. Kim, "Optimal flow distribution in service function chaining," in Proceedings of the 10th International Conference on Future Internet (CFI '15), pp. 17-20, ACM, Seoul, Republic of Korea, June 2015.

[16] A. Fischer, J. F. Botero, M. T. Beck, H. De Meer, and X. Hesselbach, "Virtual network embedding: a survey," IEEE Communications Surveys and Tutorials, vol. 15, no. 4, pp. 1888-1906, 2013.

[17] M. Rabbani, R. Pereira Esteves, M. Podlesny, G. Simon, L. Zambenedetti Granville, and R. Boutaba, "On tackling virtual data center embedding problem," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 177-184, Ghent, Belgium, May 2013.

[18] A. Basta, W. Kellerer, M. Hoffmann, H. J. Morper, and K. Hoffmann, "Applying NFV and SDN to LTE mobile core gateways, the functions placement problem," in Proceedings of the 4th Workshop on All Things Cellular: Operations, Applications and Challenges (AllThingsCellular '14), pp. 33-38, ACM, Chicago, Ill, USA, August 2014.

[19] M. Bagaa, T. Taleb, and A. Ksentini, "Service-aware network function placement for efficient traffic handling in carrier cloud," in Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC '14), pp. 2402-2407, IEEE, Istanbul, Turkey, April 2014.

[20] S. Mehraghdam, M. Keller, and H. Karl, "Specifying and placing chains of virtual network functions," in Proceedings of the IEEE 3rd International Conference on Cloud Networking (CloudNet '14), pp. 7-13, October 2014.

[21] M. Bouet, J. Leguay, and V. Conan, "Cost-based placement of vDPI functions in NFV infrastructures," in Proceedings of the 1st IEEE Conference on Network Softwarization (NetSoft '15), pp. 1-9, London, UK, April 2015.

[22] M. Xia, M. Shirazipour, Y. Zhang, H. Green, and A. Takacs, "Network function placement for NFV chaining in packet/optical datacenters," Journal of Lightwave Technology, vol. 33, no. 8, Article ID 7005460, pp. 1565-1570, 2015.

[23] O. Hyeonseok, Y. Daeun, C. Yoon-Ho, and K. Namgi, "Design of an efficient method for identifying virtual machines compatible with service chain in a virtual network environment," International Journal of Multimedia and Ubiquitous Engineering, vol. 11, no. 9, Article ID 197208, 2014.

[24] W. Cerroni and F. Callegati, "Live migration of virtual network functions in cloud-based edge networks," in Proceedings of the IEEE International Conference on Communications (ICC '14), pp. 2963-2968, IEEE, Sydney, Australia, June 2014.

[25] P. Quinn and J. Guichard, "Service function chaining: creating a service plane via network service headers," Computer, vol. 47, no. 11, pp. 38-44, 2014.

[26] M. F. Zhani, Q. Zhang, G. Simona, and R. Boutaba, "VDC Planner: dynamic migration-aware Virtual Data Center embedding for clouds," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 18-25, May 2013.

[27] M. Dobrescu, K. Argyraki, and S. Ratnasamy, "Toward predictable performance in software packet-processing platforms," in Proceedings of the 9th USENIX Symposium on Networked Systems Design and Implementation, pp. 1-14, April 2012.

Research Article

A Game for Energy-Aware Allocation of Virtualized Network Functions

Roberto Bruschi,1 Alessandro Carrega,1,2 and Franco Davoli1,2

1CNIT-University of Genoa Research Unit, 16145 Genoa, Italy
2DITEN, University of Genoa, 16145 Genoa, Italy

Correspondence should be addressed to Alessandro Carrega; alessandro.carrega@unige.it

Received 2 October 2015; Revised 22 December 2015; Accepted 11 January 2016

Academic Editor: Xavier Hesselbach

Copyright © 2016 Roberto Bruschi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Network Functions Virtualization (NFV) is a network architecture concept where network functionality is virtualized and separated into multiple building blocks that may connect or be chained together to implement the required services. The main advantages consist of an increase in network flexibility and scalability. Indeed, each part of the service chain can be allocated and reallocated at runtime, depending on demand. In this paper, we present and evaluate an energy-aware Game-Theory-based solution for resource allocation of Virtualized Network Functions (VNFs) within NFV environments. We consider each VNF as a player of the problem, which competes for the physical network node capacity pool, seeking the minimization of an individual cost function. The physical network nodes dynamically adjust their processing capacity according to the incoming workload, by means of an Adaptive Rate (AR) strategy that aims at minimizing the product of energy consumption and processing delay. On the basis of the result of the nodes' AR strategy, the VNFs' resource sharing costs assume a polynomial form in the workflows, which admits a unique Nash Equilibrium (NE). We examine the effect of different (unconstrained and constrained) forms of the nodes' optimization problem on the equilibrium, and compare the power consumption and delay achieved with energy-aware and non-energy-aware strategy profiles.

1. Introduction

In the last few years, power consumption has shown a growing and alarming trend in all industrial sectors, particularly in Information and Communication Technology (ICT). Public organizations, Internet Service Providers (ISPs), and telecom operators started reporting alarming statistics of network energy requirements and of the related carbon footprint since the first decade of the 2000s [1]. The Global e-Sustainability Initiative (GeSI) estimated a growth of ICT greenhouse gas emissions (in GtCO2e, Gt of CO2 equivalent gases) to 2.3% of global emissions (from 1.3% in 2002) in 2020, if no Green Network Technologies (GNTs) would be adopted [2]. On the other hand, the abatement potential of ICT in other industrial sectors is seven times the size of the ICT sector's own carbon footprint.

Only recently, due to the rise in energy price, the continuous growth of customer population, the increase in broadband access demand, and the expanding number of services being offered by telecoms and providers, has energy efficiency become a high-priority objective also for wired networks and service infrastructures (after having started to be addressed for datacenters and wireless networks).

The increasing network energy consumption essentially depends on new services offered, which follow Moore's law by doubling every two years, and on the need to sustain an ever-growing population of users and user devices. In order to support new generation network infrastructures and related services, telecoms and ISPs need a larger equipment base, with sophisticated architecture able to perform more and more complex operations in a scalable way. Notwithstanding these efforts, it is well known that most networks and networking equipment are currently still provisioned for busy or rush hour load, which typically exceeds their average utilization by a wide margin. While this margin is generally reached in rare and short time periods, the overall power consumption in today's networks remains more or less constant with respect to different traffic utilization levels.


The growing trend toward implementing networking functionalities by means of software [3] on general-purpose machines, and of making more aggressive use of virtualization, as represented by the paradigms of Software Defined Networking (SDN) [4] and Network Functions Virtualization (NFV) [5], would also not be sufficient in itself to reduce power consumption, unless accompanied by "green" optimization and consolidation strategies acting as energy-aware traffic engineering policies at the network level [6]. At the same time, processing devices inside the network need to be capable of adapting their performance to the changing traffic conditions, by trading off power and Quality of Service (QoS) requirements. Among the various techniques that can be adopted to this purpose to implement Control Policies (CPs) in network processing devices, Dynamic Adaptation ones consist of adapting the processing rate (AR) or of exploiting low power consumption states in idle periods (LPI) [7].

In this paper we introduce a Game-Theory-based solution for energy-aware allocation of Virtualized Network Functions (VNFs) within NFV environments. In more detail, in NFV networks a collection of service chains must be allocated on physical network nodes. A service chain is a set of one or more VNFs grouped together to provide specific service functionality, and can be represented by an oriented graph, where each node corresponds to a particular VNF and each edge describes the operational flow exchanged between a pair of VNFs.

A service request can be allocated on dedicated hardware or by using resources deployed by the Service Provider (SP) that processes the request through virtualized instances. Because of this, two types of service deployments are possible in an NFV network: (i) on physical nodes and (ii) on virtualized instances.

In this paper we focus on the second type of service deployment. We refer to this paradigm as pure NFV. As already outlined above, the SP processes the service request by means of VNFs. In particular, we developed an energy-aware solution to the problem of VNFs' allocation on physical network nodes. This solution is based on the concept of Game Theory (GT). GT is used to model interactions among self-interested players and predict their choice of strategies to optimize cost or utility functions, until a Nash Equilibrium (NE) is reached, where no player can further increase its corresponding utility through individual action (see, e.g., [8] for specific applications in networking).

More specifically, we consider a bank of physical network nodes (in this paper we also use the terms node and resource to refer to the physical network node) performing tasks on requests submitted by the players' population. Hence, in this game the role of the players is represented by VNFs, which compete for the processing capacity pool, each by seeking the minimization of an individual cost function. The nodes can dynamically adjust their processing capacity according to the incoming workload (the processing power required by incoming VNF requests), by means of an AR strategy that aims at minimizing the product of energy consumption and processing delay. On the basis of the result of the nodes' AR strategy, the VNFs' resource sharing costs assume a polynomial form in the workloads, which admits a unique NE.

We examine the effect of different (unconstrained and constrained) forms of the nodes' optimization problem on the equilibrium, and compare the power consumption and delay achieved with energy-aware and non-energy-aware strategy profiles.

The rest of the paper is organized as follows. Section 2 discusses the related work. Section 3 briefly summarizes the power model and AR optimization that was introduced in [9]. Although the scenario in [9] is different, it is reasonable to assume that similar considerations are valid here too. On the basis of this result, we derive a number of competitive strategies for the VNFs' allocation in Section 4. Section 5 presents various numerical evaluations, and Section 6 contains the conclusions.

2. Related Work

We now discuss the most relevant works that deal with the resource allocation of VNFs within NFV environments. We decided not only to describe solutions based on GT, but also to provide an overview of the current state of the art in this area. As introduced in the previous section, the key enabling paradigms that will considerably affect the dynamics of ICT networks are SDN and NFV, which are discussed in recent surveys [10-12]. Indeed, SPs and Network Operators (NOs) are facing increasing problems to design and implement novel network functionalities, following rapid changes that characterize the current ISPs and telecom operators (TOs) [13].

Virtualization represents an efficient and cost-effective strategy to exploit and share physical network resources. In this context, the Network Embedding Problem (NEP) has been considered in several recent works [14-19]. In particular, the Virtual Network Embedding Problem (VNEP) consists of finding the mapping between a set of requests for virtual network resources and the available underlying physical infrastructure (the so-called substrate), ensuring that some given performance requirements (on nodes and links) are guaranteed. Typical node requirements are computational resources (i.e., CPU) or storage space, whereas links have a limited bandwidth and introduce a delay. It has been shown that this problem is NP-hard (it includes as subproblem the multiway separator problem). For this reason, heuristic approaches have been devised [20].

The consolidation of virtual resources is considered in [18], by taking into account energy efficiency. The problem is formulated as a mixed integer linear programming (MILP) model, to understand the potential benefits that can be achieved by packing many different virtual tasks on the same physical infrastructure. The observed energy saving is up to 30% for nodes and up to 25% for link energy consumption. Reference [21] presents a solution for the resilient deployment of network functions, using OpenStack for the design and implementation of the proposed service orchestrator mechanism.

An allocation mechanism, based on auction theory, is proposed in [22]. In particular, the scheme selects the most remunerative virtual network requests, while satisfying QoS requirements and physical network constraints. The system is split into two network substrates modeling physical and virtual resources, with the final goal of finding the best mapping of virtual nodes and links onto physical ones, according to the QoS requirements (i.e., bandwidth, delay, and CPU bounds).

A novel network architecture is proposed in [23], to provide efficient coordinated control of both internal network function state and network forwarding state, in order to help operators achieve the following goals: (i) offering and satisfying tight service level agreements (SLAs); (ii) accurately monitoring and manipulating network traffic; and (iii) minimizing operating expenses.

Various engineering problems, where the action of one component has some impacts on the other components, have been modeled by GT. Indeed, GT, which has been applied at the beginning in economics and related domains, is gaining much interest today as a powerful tool to analyze and design communication networks [24]. Therefore, the problems can be formulated in the GT framework, and a stable solution for the components is obtained using the concept of equilibrium [25]. In this regard, GT has been used extensively to develop understanding of stable operating points for autonomous networks. The nodes are considered as the players. Payoff functions are often defined according to achieved connection bandwidth or similar technical metrics.

To construct algorithms with provable convergence to equilibrium points, many approaches consider network models that can be mapped to specially constructed games. Among this type of games, potential games use a real-valued function that represents the entire player set to optimize some performance metric [8, 26]. We mention Altman et al. [27], who provide an extensive survey on networking games. The models and papers discussed in this reference mostly deal with noncooperative GT, the only exception being a short section focused on bargaining games. Finally, Seddiki et al. [25] presented an approach based on two-stage noncooperative games for bandwidth allocation, which aims at reducing the complexity of network management and avoiding bandwidth performance problems in a virtualized network environment. The first stage of the game is the bandwidth negotiation, where the SP requests bandwidth from multiple Infrastructure Providers (InPs). Each InP decides whether to accept or deny the request, when the SP would cause link congestion. The second stage is the bandwidth provisioning game, where different SPs compete for the bandwidth capacity of a physical link, managed by a single InP.

3. Modeling Power Management Techniques

Dynamic Adaptation techniques in physical network nodes reduce the energy usage by exploiting the fact that systems do not need to run at peak performance all the time.

Rate adaptation is obtained by tuning the clock frequency and/or voltage of processors (DVFS, Dynamic Voltage and Frequency Scaling), or by throttling the CPU clock (i.e., the clock signal is disabled for some number of cycles at regular intervals). Decreasing the operating frequency and the voltage of the node, or throttling its clock, obviously allows the reduction of power consumption and of heat dissipation, at the price of slower performance.

Advantage can be taken of these adaptation capabilities to devise control techniques based on feedback on the incoming load. One of the first proposals is that examined in a seminal paper by Nedevschi et al. [28]. Among other possibilities, we consider here the simple optimization strategy developed in [9], aimed at minimizing the product of power consumption and processing delay with respect to the processing load. The application of this control technique gives rise to a quadratic dependence of the power-delay product on the load, which will be exploited in our subsequent game-theoretic resource allocation problem. Unlike the cited works, our solution considers as load the request rate of services that the SP must process. However, apart from this difference, the general considerations described are valid here too.

Specifically, the dynamic frequency-dependent part of the processor power consumption is proportional to the clock frequency ν and to the square of the supply voltage V [29]. However, if DVFS is used, there is proportionality between the frequency and the voltage raised to some power γ, 0 < γ ≤ 1 [30]. Moreover, the power scaling effect induced by alternating active-idle periods in the queueing system served by the physical network node can be accounted for by multiplying the dynamic power consumption by an increasing concave function of the resource utilization, of the form (f/μ)^{1/α}, where α ≥ 1 is a parameter, and f and μ are the workload (processing requests per unit time) and the task processing speed, respectively. By taking γ = 1 (which implies a cubic relationship in the operating frequency), the overall dependence of the power consumption Φ on the processing speed (which is proportional to the operating frequency) can be expressed as [9]

\Phi(\mu) = K \mu^{3} \left(\frac{f}{\mu}\right)^{1/\alpha},  (1)

where K > 0 is a proportionality constant. This expression will be exploited in the optimization problems to be defined and studied in the next section.
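As an aside from the original text, a minimal Python sketch of the power model in (1) may help fix the roles of the parameters; the values of K, α, f, and μ below are purely illustrative assumptions.

```python
# Power model of (1): dynamic power grows cubically with the processing
# speed mu, scaled by the utilization-dependent factor (f/mu)**(1/alpha).
def power(mu, f, K=2.0, alpha=2.5):
    """Return Phi(mu) = K * mu**3 * (f/mu)**(1/alpha), for 0 < f <= mu."""
    return K * mu ** 3 * (f / mu) ** (1.0 / alpha)

if __name__ == "__main__":
    f = 0.6                      # workload (requests per unit time), hypothetical
    for mu in (0.8, 1.0, 1.5):   # candidate processing speeds
        print(f"mu={mu:.1f}  Phi={power(mu, f):.3f}")
```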

4. System Model of Coordinated Network Management Optimization and Players' Game

We consider the situation shown in Figure 1. There are S VNFs, represented by the incoming workload rates λ_1, ..., λ_S, competing for N physical network nodes. The workload to the j-th node is given by

f_j = \sum_{i=1}^{S} f_j^i,  (2)

where f_j^i is VNF i's contribution. The total workload vector of player i is f^i = col[f_1^i, ..., f_N^i], and obviously

\sum_{j=1}^{N} f_j^i = \lambda_i.  (3)

[Figure 1: System model for VNFs resource allocation.]

The computational resources have processing rates μ_j, j = 1, ..., N, and we associate a cost function with each one, namely,

c_j(\mu_j) = \frac{K_j (f_j)^{1/\alpha_j} (\mu_j)^{3-(1/\alpha_j)}}{\mu_j - f_j}.  (4)

Cost functions of the form shown in (4) have been introduced in [9] and represent the product of power and processing delay, under M/M/1 assumption on each node's queue. They actually correspond to the average energy consumption that can be attributed to the functions' handling (considering that the node will be active whenever there are queued functions). Despite the possible inaccuracy in the representation of the queueing behavior, the denominator of (4) reflects the penalty paid for approaching the resource capacity, which is a major aspect in a model to be used for control purposes.

We now consider two specific control problems, whose resulting resource allocations are interconnected. Specifically, we assume the presence of Control Policies (CPs) acting in the network node, whose aim is to dynamically adjust the processing rate, in order to minimize cost functions of the form of (4). On the other hand, the players representing the VNFs, knowing the policy adopted by the CPs and the resulting form of their optimal costs, implement their own strategies to distribute their requests among the different resources.

The simple control structure that we envisage actually represents a situation that may be of interest in a number of different operational settings. We only mention two different cases of current high relevance: (i) a multitenant environment, where a number of ISPs are offered services by a datacenter operator (or by a telecom operator in the access network) on a number of virtual machines, and (ii) a collection of virtual environments created by an SP on behalf of its customers, where services are activated to represent customers' functionalities on their own private virtual LANs. It is worth noting that, in both cases, the nodes' customers may be unwilling to disclose their loads to one another, which justifies the decentralized game optimization.

4.1. Nodes' Control Policies. We consider two possible variants.

4.1.1. CP1 (Unconstrained). The direct minimization of (4) with respect to μ_j yields immediately

\mu_j^{*}(f_j) = \frac{3\alpha_j - 1}{2\alpha_j - 1} f_j = \theta_j f_j,
\qquad
c_j^{*}(f_j) = K_j \frac{(\theta_j)^{3-(1/\alpha_j)}}{\theta_j - 1} (f_j)^2 = h_j \cdot (f_j)^2.  (5)

We must note that we are considering a continuous solution to the node capacity adjustment. In practice, the physical resources allow a discrete set of working frequencies, with corresponding processing capacities. This would also ensure that the processing speed does not decrease below a lower threshold, avoiding excessive delay in the case of low load. The unconstrained problem is an idealized variant, which would make sense only when the load on the node is not too small.
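As a sanity check (ours, not part of the paper), the closed form in (5) can be verified numerically by minimizing (4) over a grid; all parameter values below are hypothetical.

```python
import numpy as np

K, alpha, f = 4.0, 2.5, 0.7          # hypothetical node parameters and load

def cost(mu):
    # c(mu) = K * f**(1/alpha) * mu**(3 - 1/alpha) / (mu - f), valid for mu > f
    return K * f ** (1 / alpha) * mu ** (3 - 1 / alpha) / (mu - f)

theta = (3 * alpha - 1) / (2 * alpha - 1)        # closed form from (5)
mu_grid = np.linspace(f * 1.001, 5 * f, 200000)  # search only where mu > f
mu_num = mu_grid[np.argmin(cost(mu_grid))]

print(f"closed form: {theta * f:.4f}, numerical: {mu_num:.4f}")
# The two values agree up to the grid resolution, confirming mu* = theta * f.
```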

4.1.2. CP2 (Individual Constraints). Each node has a maximum and a minimum processing capacity, which are taken explicitly into account (\mu_j^{\min} \le \mu_j \le \mu_j^{\max}, j = 1, ..., N). Then

\mu_j^{*}(f_j) = \begin{cases} \theta_j f_j, & \underline{f}_j = \dfrac{\mu_j^{\min}}{\theta_j} \le f_j \le \dfrac{\mu_j^{\max}}{\theta_j} = \overline{f}_j, \\ \mu_j^{\min}, & 0 < f_j < \underline{f}_j, \\ \mu_j^{\max}, & f_j > \overline{f}_j. \end{cases}  (6)

Therefore,

c_j^{*}(f_j) = \begin{cases} h_j \cdot (f_j)^2, & \underline{f}_j \le f_j \le \overline{f}_j, \\ \dfrac{K_j (f_j)^{1/\alpha_j} (\mu_j^{\max})^{3-(1/\alpha_j)}}{\mu_j^{\max} - f_j}, & f_j > \overline{f}_j, \\ \dfrac{K_j (f_j)^{1/\alpha_j} (\mu_j^{\min})^{3-(1/\alpha_j)}}{\mu_j^{\min} - f_j}, & 0 < f_j < \underline{f}_j. \end{cases}  (7)

In this case, we assume explicitly that \sum_{i=1}^{S} \lambda_i < \sum_{j=1}^{N} \mu_j^{\max}.
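A direct transcription of (6) and (7) is the clamped policy sketched below (our illustration, with hypothetical argument values): the CP2 rate is simply the CP1 solution θ_j f_j saturated at the capacity bounds, and the cost follows by substitution into (4).

```python
def cp2_rate(f, mu_min, mu_max, alpha):
    """Optimal processing rate under CP2, i.e., (6): theta*f clamped to [mu_min, mu_max]."""
    theta = (3 * alpha - 1) / (2 * alpha - 1)
    return min(max(theta * f, mu_min), mu_max)

def cp2_cost(f, mu_min, mu_max, alpha, K):
    """Optimal node cost under CP2, i.e., (7), obtained by evaluating (4) at the clamped rate."""
    mu = cp2_rate(f, mu_min, mu_max, alpha)
    return K * f ** (1 / alpha) * mu ** (3 - 1 / alpha) / (mu - f)

# Hypothetical example: theta = 1.625 for alpha = 2.5, so theta*f = 0.8125 lies inside the bounds.
print(cp2_rate(0.5, mu_min=0.4, mu_max=1.2, alpha=2.5))
```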

4.2. Noncooperative Players' Optimization. Given the optimal CPs, which can be found in functional form, we can then state the following.

4.2.1. Players' Problem. Each VNF i wants to minimize, with respect to its workload vector f^i = col[f_1^i, ..., f_N^i], a weighted (over its workload distribution) sum of the resources' costs, given the flow distribution of the others:

f^{i*} = \arg\min_{f_1^i,\dots,f_N^i:\ \sum_{j=1}^{N} f_j^i = \lambda_i} J^i = \arg\min_{f_1^i,\dots,f_N^i:\ \sum_{j=1}^{N} f_j^i = \lambda_i} \frac{1}{\lambda_i} \sum_{j=1}^{N} f_j^i\, c_j^{*}(f_j), \quad i = 1, \dots, S.  (8)

In this formulation, the players' problem is a noncooperative game, of which we can seek a NE [31].

We examine the case of the application of CP1 first. Then the cost of VNF i is given by

J^i = \frac{1}{\lambda_i} \sum_{j=1}^{N} f_j^i\, h_j (f_j)^2 = \frac{1}{\lambda_i} \sum_{j=1}^{N} f_j^i\, h_j (f_j^i + f_j^{-i})^2,  (9)

where f_j^{-i} = \sum_{k \ne i} f_j^k is the total flow from the other VNFs to node j.

The cost in (9) is convex in f_j^i and belongs to a category of cost functions previously investigated in the networking literature, without specific reference to energy efficiency [32, 33]. In particular, it is in the family of functions considered in [33], for which the NE Point (NEP) existence and uniqueness conditions of Rosen [34] have been shown to hold. Therefore, our players' problem admits a unique NEP. The latter can be found by considering the Karush-Kuhn-Tucker stationarity conditions with Lagrange multiplier ξ_i:

\xi_i = \frac{\partial J^i}{\partial f_k^i} \ \text{ if } f_k^i > 0; \qquad \xi_i \le \frac{\partial J^i}{\partial f_k^i} \ \text{ if } f_k^i = 0; \qquad k = 1, \dots, N,  (10)

from which, for f_k^i > 0,

\lambda_i \xi_i = h_k (f_k^i + f_k^{-i})^2 + 2 h_k f_k^i (f_k^i + f_k^{-i}),
\qquad
f_k^i = -\frac{2}{3} f_k^{-i} \pm \frac{1}{3 h_k} \sqrt{(h_k f_k^{-i})^2 + 3 h_k \lambda_i \xi_i}.  (11)

Excluding the negative solution, the nonzero components are those with the smallest and equal partial derivatives that, in a subset M \subseteq \{1, \dots, N\}, yield \sum_{j \in M} f_j^i = \lambda_i; that is,

\sum_{j \in M} \left[ -\frac{2}{3} f_j^{-i} + \frac{1}{3 h_j} \sqrt{(h_j f_j^{-i})^2 + 3 h_j \lambda_i \xi_i} \right] = \lambda_i,  (12)

from which ξ_i can be determined.

As regards form (7) of the optimal node cost, we can note that, if we are restricted to \underline{f}_j \le f_j \le \overline{f}_j, j = 1, \dots, N (a coupled convex constraint set), the conditions of Rosen still hold. If we allow the larger constraint set 0 < f_j < \mu_j^{\max}, j = 1, \dots, N, the nodes' optimal cost functions are no longer of polynomial type. However, the composite function is still continuously differentiable (see the Appendix). Then it would (i) satisfy the conditions for a convex game (the overall function composed of the three parts is convex), which guarantee the existence of a NEP [34, Th. 1], and (ii) possess equivalent properties to functions of Type A, as defined in [32], which lead to uniqueness of the NEP.

5. Numerical Results

In order to evaluate our criterion, we present numerical results deriving from the application of the simplest Control Policy (CP1), which implies the solution of a completely quadratic problem (corresponding to costs as in (9) for the NEP). We compare the resulting allocations, power consumption, and average delays with those stemming from the application (on non-energy-aware nodes) of the algorithm for the minimization of the pure delay function that was developed in [35, Proposition 1]. This algorithm is considered a valid reference point, in order to provide a good validation of our criterion. The corresponding cost of VNF i is

J_T^i = \sum_{j=1}^{N} f_j^i\, T_j(f_j), \quad i = 1, \dots, S,  (13)

where

T_j(f_j) = \begin{cases} \dfrac{1}{\mu_j^{\max} - f_j}, & f_j < \mu_j^{\max}, \\ \infty, & f_j \ge \mu_j^{\max}. \end{cases}  (14)

The total demand is assumed to be less than the total operating capacity, as we have done with our CP2 in (7).

To make the comparison fair, we have to make sure that the maximum operating capacities \mu_j^{\max}, j = 1, ..., N (that are constant in (14)), are compatible with the quadratic behavior stemming from the application of CP1. To this aim, let {}^{(\mathrm{LCP1})}f_j denote the total input workload to node j corresponding to the NEP obtained by our problem; we then choose

\mu_j^{\max} = \theta_j\, {}^{(\mathrm{LCP1})}f_j, \quad j = 1, \dots, N.  (15)

We indicate with {}^{(T)}f_j, {}^{(T)}f_j^i, j = 1, ..., N, i = 1, ..., S, the total node input workloads and those corresponding to VNF i, respectively, produced by the NEP deriving from the minimization of the costs in (13). The average delays for the flow of player i under the two strategy profiles are obtained as

D_T^i = \frac{1}{\lambda_i} \sum_{j=1}^{N} \frac{{}^{(T)}f_j^i}{\mu_j^{\max} - {}^{(T)}f_j} = \frac{1}{\lambda_i} \sum_{j=1}^{N} \frac{{}^{(T)}f_j^i}{\theta_j\, {}^{(\mathrm{LCP1})}f_j - {}^{(T)}f_j},

D_{\mathrm{LCP1}}^i = \frac{1}{\lambda_i} \sum_{j=1}^{N} \frac{{}^{(\mathrm{LCP1})}f_j^i}{\theta_j\, {}^{(\mathrm{LCP1})}f_j - {}^{(\mathrm{LCP1})}f_j} = \frac{1}{\lambda_i} \sum_{j=1}^{N} \frac{{}^{(\mathrm{LCP1})}f_j^i}{(\theta_j - 1)\, {}^{(\mathrm{LCP1})}f_j},  (16)

and the power consumption values are given by

{}^{(T)}P_j = K_j (\mu_j^{\max})^3 = K_j (\theta_j\, {}^{(\mathrm{LCP1})}f_j)^3,

{}^{(\mathrm{LCP1})}P_j = K_j (\theta_j\, {}^{(\mathrm{LCP1})}f_j)^3 \left(\frac{{}^{(\mathrm{LCP1})}f_j}{\theta_j\, {}^{(\mathrm{LCP1})}f_j}\right)^{1/\alpha_j} = K_j (\theta_j\, {}^{(\mathrm{LCP1})}f_j)^3\, \theta_j^{-1/\alpha_j}.  (17)

Our data for the comparison are organized as follows. We consider normalized input rates, so that λ_i ≤ 1, i = 1, ..., S. We generate randomly S values corresponding to the VNFs' input rates from independent uniform distributions; then:

(1) we find the NEP of our quadratic game, by choosing the parameters K_j, α_j, j = 1, ..., N, also from independent uniform distributions (with 1 ≤ K_j ≤ 10 and 2 ≤ α_j ≤ 3, resp.);

(2) by using the workload profiles obtained, we compute the values μ_j^max, j = 1, ..., N, from (15), and find the NEP corresponding to the costs in (13) and (14), by using the same input rates;

(3) we compute the corresponding average delays and power consumption values for the two cases, by using (16) and (17), respectively.

Steps (1)-(3) are repeated with a new configuration of random values of the input rates for a certain number of times; then, for both games, we average the values of the delays and power consumption per VNF, and of the total delay and power consumption averaged over the VNFs, over all trials. We compare the results obtained in this way for four different settings of VNFs and nodes, namely, S = N = 3; S = 3, N = 5; S = 5, N = 3; S = 5, N = 5. The rationale of repeating the experiments with random configurations is to explore a number of different possible settings and collect the results in a synthetic form.
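Steps (1)-(3) can be organized as in the sketch below (our illustration, not the original MATLAB code); a generic best-response solver is reused for both games, and all numerical choices beyond the stated parameter ranges are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def solve_nep(cost, lam, N, sweeps=100, tol=1e-9):
    """Generic best-response iteration; cost(fi, others, lam_i) is one player's cost."""
    f = np.array([li * np.ones(N) / N for li in lam])
    for _ in range(sweeps):
        prev = f.copy()
        for i in range(len(lam)):
            others = f.sum(axis=0) - f[i]
            f[i] = minimize(cost, f[i], args=(others, lam[i]), method="SLSQP",
                            bounds=[(0.0, lam[i])] * N,
                            constraints={"type": "eq",
                                         "fun": lambda x, li=lam[i]: x.sum() - li}).x
        if np.abs(f - prev).max() < tol:
            break
    return f

rng = np.random.default_rng(1)
S, N, TRIALS = 3, 3, 5
for _ in range(TRIALS):
    lam = rng.uniform(0.1, 1.0, S)                            # normalized input rates
    K, alpha = rng.uniform(1, 10, N), rng.uniform(2, 3, N)    # step (1)
    theta = (3 * alpha - 1) / (2 * alpha - 1)
    h = K * theta ** (3 - 1 / alpha) / (theta - 1)

    ee = solve_nep(lambda fi, o, li: np.sum(fi * h * (fi + o) ** 2) / li, lam, N)
    F_ee = ee.sum(axis=0)                                     # (LCP1)f_j per node

    mu_max = theta * F_ee                                     # step (2), eq. (15)
    nee = solve_nep(lambda fi, o, li:
                    np.sum(fi / np.maximum(mu_max - fi - o, 1e-12)), lam, N)
    F_nee = nee.sum(axis=0)

    D_ee = (ee / ((theta - 1) * F_ee)).sum(axis=1) / lam      # step (3), eq. (16)
    D_nee = (nee / (mu_max - F_nee)).sum(axis=1) / lam
    P_nee = K * mu_max ** 3                                   # eq. (17)
    P_ee = P_nee * theta ** (-1.0 / alpha)
    print(f"power saving {100 * (1 - P_ee.sum() / P_nee.sum()):5.1f}%  "
          f"delay increase {100 * (D_ee.mean() / D_nee.mean() - 1):5.1f}%")
```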

For each VNFs-nodes setting, 30 random input configurations are generated to produce the final average values. In Figures 2-10, the labels EE and NEE refer to the energy-efficient case (quadratic cost) and to the non-energy-efficient one (see (13) and (14)), respectively. Instead, the label "Saving (%)" refers to the saving (in percentage) of the EE case compared to the NEE one. When it is negative, the EE case presents higher values than the NEE one. Finally, the label "User" refers to the VNF.

[Figure 2: Power consumption with S = N = 3.]

[Figure 3: User (VNF) delays with S = N = 3.]

The results are reported in Figures 2-10. The model implementation and the relative results have been developed with MATLAB® [36].

In general, the energy-efficient optimization tends to save energy at the expense of a relatively small increase in delay. Indeed, as shown in the figures, the saving is positive for the energy consumption and negative for the delay. The energy saving is always higher than 18% in every case, while the delay increase is always lower than 4%.

However, it can be noted (by comparing, e.g., Figures 5 and 7) that configurations with lower load are penalized in the delay. This effect is caused by the behavior of the simple linear adaptation strategy, which maintains constant utilization, and highlights the relevance of setting not only maximum, but also minimum, values for the processing capacity (or for the node input workloads).

[Figure 4: Power consumption with S = 3, N = 5.]

[Figure 5: User (VNF) delays with S = 3, N = 5.]

[Figure 6: Power consumption with S = 5, N = 3.]

[Figure 7: User (VNF) delays with S = 5, N = 3.]

[Figure 8: Power consumption with S = 5, N = 5.]

[Figure 9: User (VNF) delays with S = 5, N = 5.]

[Figure 10: Power consumption and delay product of the various cases.]

Another consideration arising from the results concerns the variation of the energy consumption by changing the number of players and/or the nodes. For example, Figure 6 (case S = 5, N = 3) shows a total power consumption significantly higher than the one shown in Figure 8 (case S = 5, N = 5). The reasons for this difference are due to the lower workload managed by the nodes of the case S = 5, N = 5 (greater number of nodes with an equal number of users), making it possible to exploit the AR mechanisms with significant energy saving.

Finally, Figure 10 shows the product of the power consumption and the delay for each studied case. As described in the previous sections, our criterion aims at minimizing this product. As can be easily seen, the saving resulting from the application of our policy is always above 10%.

6. Conclusions

We have examined the possible effect of introducing simple energy optimization strategies in physical network nodes performing services for a set of VNFs that request their computational power, within an NFV environment. The VNFs' strategy profiles for resource sharing have been obtained from the solution of a noncooperative game, where the players are aware of the adaptive behavior of the resources, and aim at minimizing costs that reflect this behavior. We have compared the energy consumption and processing delay with those stemming from a game played with non-energy-aware nodes and VNFs. The results show that the effect on delay is almost negligible, whereas significant power saving can be obtained. In particular, the energy saving is always higher than 18% in every case, with a delay increase always lower than 4%.

From another point of view, it would also be of interest to investigate socially optimal solutions stemming from a Nash Bargaining Problem (NBP), along the lines of [37], by defining suitable utility functions for the players. Another interesting evolution of this work could be the application of additional constraints to the proposed solution, such as, for example, the maximum delay that a flow can support.

Appendix

We check the maintenance of continuous differentiability at the point f_j = \mu_j^{\max}/\theta_j. By noting that the derivative of the cost function of player i with respect to f_j^i has the form

\frac{\partial J^i}{\partial f_j^i} = c_j^{*}(f_j) + f_j^i\, c_j^{*\prime}(f_j),  (A.1)

it is immediate to note that we need to compare

\left. \frac{\partial}{\partial f_j^i} \left[ \frac{K_j (f_j^i + f_j^{-i})^{1/\alpha_j} (\mu_j^{\max})^{3 - 1/\alpha_j}}{\mu_j^{\max} - f_j^i - f_j^{-i}} \right] \right|_{f_j^i = \mu_j^{\max}/\theta_j - f_j^{-i}}
\quad \text{with} \quad
\left. \frac{\partial}{\partial f_j^i} \left[ h_j (f_j)^2 \right] \right|_{f_j^i = \mu_j^{\max}/\theta_j - f_j^{-i}}.  (A.2)

We have

K_j \left. \frac{(1/\alpha_j)(f_j)^{1/\alpha_j - 1} (\mu_j^{\max})^{3 - 1/\alpha_j} (\mu_j^{\max} - f_j) + (f_j)^{1/\alpha_j} (\mu_j^{\max})^{3 - 1/\alpha_j}}{(\mu_j^{\max} - f_j)^2} \right|_{f_j = \mu_j^{\max}/\theta_j}
= \frac{K_j (\mu_j^{\max})^{3 - 1/\alpha_j} (\mu_j^{\max}/\theta_j)^{1/\alpha_j} \left[ (1/\alpha_j)(\mu_j^{\max}/\theta_j)^{-1} (\mu_j^{\max} - \mu_j^{\max}/\theta_j) + 1 \right]}{(\mu_j^{\max} - \mu_j^{\max}/\theta_j)^2}
= \frac{K_j (\mu_j^{\max})^3 (\theta_j)^{-1/\alpha_j} \left[ (1/\alpha_j)\,\theta_j (1 - 1/\theta_j) + 1 \right]}{(\mu_j^{\max})^2 (1 - 1/\theta_j)^2}
= \frac{K_j \mu_j^{\max} (\theta_j)^{-1/\alpha_j} (\theta_j/\alpha_j - 1/\alpha_j + 1)}{((\theta_j - 1)/\theta_j)^2},

2 h_j f_j \big|_{f_j = \mu_j^{\max}/\theta_j} = 2 K_j \frac{(\theta_j)^{3 - 1/\alpha_j}}{\theta_j - 1} \cdot \frac{\mu_j^{\max}}{\theta_j}.  (A.3)

Then, let us check when

\frac{K_j \mu_j^{\max} (\theta_j)^{-1/\alpha_j} (\theta_j/\alpha_j - 1/\alpha_j + 1)}{((\theta_j - 1)/\theta_j)^2} = 2 K_j \frac{(\theta_j)^{3 - 1/\alpha_j}}{\theta_j - 1} \cdot \frac{\mu_j^{\max}}{\theta_j}:

(\theta_j/\alpha_j - 1/\alpha_j + 1) \cdot \frac{(\theta_j)^2}{\theta_j - 1} = 2 (\theta_j)^2,

\frac{\theta_j}{\alpha_j} - \frac{1}{\alpha_j} + 1 = 2\theta_j - 2.

Substituting \theta_j = (3\alpha_j - 1)/(2\alpha_j - 1):

\left( \frac{1}{\alpha_j} - 2 \right) \frac{3\alpha_j - 1}{2\alpha_j - 1} - \frac{1}{\alpha_j} + 3 = 0,

\left( \frac{1 - 2\alpha_j}{\alpha_j} \right) \frac{3\alpha_j - 1}{2\alpha_j - 1} - \frac{1}{\alpha_j} + 3 = 0,

\frac{1 - 3\alpha_j}{\alpha_j} - \frac{1}{\alpha_j} + 3 = 0,

1 - 3\alpha_j - 1 + 3\alpha_j = 0,  (A.4)

which holds identically; hence, the two derivatives coincide, and c_j^{*}(\cdot) is continuously differentiable at f_j = \mu_j^{\max}/\theta_j.
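The chain of equalities above can also be checked symbolically; the short sympy script below (our addition, not from the paper) confirms that the derivatives of the two branches of (7) coincide at f_j = \mu_j^{\max}/\theta_j for representative values of α_j.

```python
import sympy as sp

K, mu_max, alpha, f = sp.symbols("K mu_max alpha f", positive=True)
theta = (3 * alpha - 1) / (2 * alpha - 1)
h = K * theta ** (3 - 1 / alpha) / (theta - 1)

quad = h * f ** 2                                                      # middle branch of (7)
sat = K * f ** (1 / alpha) * mu_max ** (3 - 1 / alpha) / (mu_max - f)  # branch beyond f_bar

left = sp.diff(quad, f).subs(f, mu_max / theta)
right = sp.diff(sat, f).subs(f, mu_max / theta)

# The difference vanishes for any admissible alpha_j; check a few rational values:
for a in (sp.Rational(3, 2), 2, 3):
    print(sp.simplify((left - right).subs(alpha, a)))   # prints 0 each time
```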

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was partially supported by the H2020 European Projects ARCADIA (http://www.arcadia-framework.eu) and INPUT (http://www.input-project.eu), funded by the European Commission.

References

[1] C. Bianco, F. Cucchietti, and G. Griffa, "Energy consumption trends in the next generation access network - a telco perspective," in Proceedings of the 29th International Telecommunication Energy Conference (INTELEC '07), pp. 737-742, Rome, Italy, September-October 2007.

[2] GeSI, SMARTer 2020: The Role of ICT in Driving a Sustainable Future, December 2012, http://gesi.org/portfolio/report/72.

[3] A. Manzalini, "Future edge ICT networks," IEEE COMSOC MMTC E-Letter, vol. 7, no. 7, pp. 1-4, 2012.

[4] B. A. A. Nunes, M. Mendonca, X.-N. Nguyen, K. Obraczka, and T. Turletti, "A survey of software-defined networking: past, present and future of programmable networks," IEEE Communications Surveys & Tutorials, vol. 16, no. 3, pp. 1617-1634, 2014.

[5] "Network functions virtualisation: introductory white paper," in Proceedings of the SDN and OpenFlow World Congress, Darmstadt, Germany, October 2012.

[6] R. Bolla, R. Bruschi, F. Davoli, and C. Lombardo, "Fine-grained energy-efficient consolidation in SDN networks and devices," IEEE Transactions on Network and Service Management, vol. 12, no. 2, pp. 132-145, 2015.

[7] R. Bolla, R. Bruschi, F. Davoli, and F. Cucchietti, "Energy efficiency in the future internet: a survey of existing approaches and trends in energy-aware fixed network infrastructures," IEEE Communications Surveys & Tutorials, vol. 13, no. 2, pp. 223-244, 2011.

[8] I. Menache and A. Ozdaglar, Network Games: Theory, Models and Dynamics, Morgan & Claypool Publishers, San Rafael, Calif, USA, 2011.

[9] R. Bolla, R. Bruschi, F. Davoli, and P. Lago, "Optimizing the power-delay product in energy-aware packet forwarding engines," in Proceedings of the 24th Tyrrhenian International Workshop on Digital Communications - Green ICT (TIWDC '13), pp. 1-5, IEEE, Genoa, Italy, September 2013.

[10] A. Khan, A. Zugenmaier, D. Jurca, and W. Kellerer, "Network virtualization: a hypervisor for the internet," IEEE Communications Magazine, vol. 50, no. 1, pp. 136-143, 2012.

[11] Q. Duan, Y. Yan, and A. V. Vasilakos, "A survey on service-oriented network virtualization toward convergence of networking and cloud computing," IEEE Transactions on Network and Service Management, vol. 9, no. 4, pp. 373-392, 2012.

[12] G. M. Roy, S. K. Saurabh, N. M. Upadhyay, and P. K. Gupta, "Creation of virtual node, virtual link and managing them in network virtualization," in Proceedings of the World Congress on Information and Communication Technologies (WICT '11), pp. 738-742, IEEE, Mumbai, India, December 2011.

[13] A. Manzalini, R. Saracco, C. Buyukkoc, et al., "Software-defined networks or future networks and services: main technical challenges and business implications," in Proceedings of the IEEE Workshop SDN4FNS, Trento, Italy, November 2013.

[14] M. Yu, Y. Yi, J. Rexford, and M. Chiang, "Rethinking virtual network embedding: substrate support for path splitting and migration," ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, pp. 17-19, 2008.

[15] A. Jarray and A. Karmouch, "Decomposition approaches for virtual network embedding with one-shot node and link mapping," IEEE/ACM Transactions on Networking, vol. 23, no. 3, pp. 1012-1025, 2015.

[16] N. M. M. K. Chowdhury, M. R. Rahman, and R. Boutaba, "Virtual network embedding with coordinated node and link mapping," in Proceedings of the 28th Conference on Computer Communications (IEEE INFOCOM '09), pp. 783-791, Rio de Janeiro, Brazil, April 2009.

[17] X. Cheng, S. Su, Z. Zhang, et al., "Virtual network embedding through topology-aware node ranking," ACM SIGCOMM Computer Communication Review, vol. 41, no. 2, pp. 38-47, 2011.

[18] J. F. Botero, X. Hesselbach, M. Duelli, D. Schlosser, A. Fischer, and H. De Meer, "Energy efficient virtual network embedding," IEEE Communications Letters, vol. 16, no. 5, pp. 756-759, 2012.

[19] A. Leivadeas, C. Papagianni, and S. Papavassiliou, "Efficient resource mapping framework over networked clouds via iterated local search-based request partitioning," IEEE Transactions on Parallel and Distributed Systems, vol. 24, no. 6, pp. 1077-1086, 2013.

[20] A. Fischer, J. F. Botero, M. T. Beck, H. de Meer, and X. Hesselbach, "Virtual network embedding: a survey," IEEE Communications Surveys & Tutorials, vol. 15, no. 4, pp. 1888-1906, 2013.

[21] M. Scholler, M. Stiemerling, A. Ripke, and R. Bless, "Resilient deployment of virtual network functions," in Proceedings of the 5th International Congress on Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT '13), pp. 208-214, Almaty, Kazakhstan, September 2013.

[22] A. Jarray and A. Karmouch, "VCG auction-based approach for efficient Virtual Network embedding," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 609-615, Ghent, Belgium, May 2013.

[23] A. Gember-Jacobson, R. Viswanathan, C. Prakash, et al., "OpenNF: enabling innovation in network function control," in Proceedings of the ACM Conference on Special Interest Group on Data Communication (SIGCOMM '14), pp. 163-174, ACM, Chicago, Ill, USA, August 2014.

[24] J. Antoniou and A. Pitsillides, Game Theory in Communication Networks: Cooperative Resolution of Interactive Networking Scenarios, CRC Press, Boca Raton, Fla, USA, 2013.

[25] M. S. Seddiki, M. Frikha, and Y.-Q. Song, "A non-cooperative game-theoretic framework for resource allocation in network virtualization," Telecommunication Systems, vol. 61, no. 2, pp. 209-219, 2016.

[26] L. Pavel, Game Theory for Control of Optical Networks, Springer, New York, NY, USA, 2012.

[27] E. Altman, T. Boulogne, R. El-Azouzi, T. Jimenez, and L. Wynter, "A survey on networking games in telecommunications," Computers & Operations Research, vol. 33, no. 2, pp. 286-311, 2006.

[28] S. Nedevschi, L. Popa, G. Iannaccone, S. Ratnasamy, and D. Wetherall, "Reducing network energy consumption via sleeping and rate-adaptation," in Proceedings of the 5th USENIX Symposium on Networked Systems Design and Implementation (NSDI '08), pp. 323-336, San Francisco, Calif, USA, April 2008.

[29] S. Henzler, Power Management of Digital Circuits in Deep Sub-Micron CMOS Technologies, vol. 25 of Springer Series in Advanced Microelectronics, Springer, Dordrecht, The Netherlands, 2007.

[30] K. Li, "Scheduling parallel tasks on multiprocessor computers with efficient power management," in Proceedings of the IEEE International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum (IPDPSW '10), pp. 1-8, IEEE, Atlanta, Ga, USA, April 2010.

[31] T. Basar, Lecture Notes on Non-Cooperative Game Theory, Hamilton Institute, Kildare, Ireland, 2010, http://www.hamilton.ie/ollie/Downloads/Game.pdf.

[32] A. Orda, R. Rom, and N. Shimkin, "Competitive routing in multiuser communication networks," IEEE/ACM Transactions on Networking, vol. 1, no. 5, pp. 510-521, 1993.

[33] E. Altman, T. Basar, T. Jimenez, and N. Shimkin, "Competitive routing in networks with polynomial costs," IEEE Transactions on Automatic Control, vol. 47, no. 1, pp. 92-96, 2002.

[34] J. B. Rosen, "Existence and uniqueness of equilibrium points for concave N-person games," Econometrica, vol. 33, no. 3, pp. 520-534, 1965.

[35] Y. A. Korilis, A. A. Lazar, and A. Orda, "Architecting noncooperative networks," IEEE Journal on Selected Areas in Communications, vol. 13, no. 7, pp. 1241-1251, 1995.

[36] MATLAB: The Language of Technical Computing, http://www.mathworks.com/products/matlab/.

[37] H. Yaïche, R. R. Mazumdar, and C. Rosenberg, "A game theoretic framework for bandwidth allocation and pricing in broadband networks," IEEE/ACM Transactions on Networking, vol. 8, no. 5, pp. 667-678, 2000.

Research Article

A Processor-Sharing Scheduling Strategy for NFV Nodes

Giuseppe Faraci, Alfio Lombardo, and Giovanni Schembra

Dipartimento di Ingegneria Elettrica, Elettronica e Informatica (DIEEI), University of Catania, 95123 Catania, Italy

Correspondence should be addressed to Giovanni Schembra; schembra@dieei.unict.it

Received 2 November 2015; Accepted 12 January 2016

Academic Editor: Xavier Hesselbach

Copyright © 2016 Giuseppe Faraci et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The introduction of the two paradigms SDN and NFV to "softwarize" the current Internet is making management and resource allocation two key challenges in the evolution towards the Future Internet. In this context, this paper proposes Network-Aware Round Robin (NARR), a processor-sharing strategy to reduce delays in traversing SDN/NFV nodes. The application of NARR alleviates the job of the Orchestrator by automatically working at the intranode level, dynamically assigning the processor slices to the virtual network functions (VNFs) according to the state of the queues associated with the output links of the network interface cards (NICs). An extensive simulation set is presented to show the improvements achieved with respect to two more processor-sharing strategies chosen as reference.

1. Introduction

In the last few years, the diffusion of new complex and efficient distributed services in the Internet is becoming increasingly difficult, because of the ossification of the Internet protocols, the proprietary nature of existing hardware appliances, the costs, and the lack of skilled professionals for maintaining and upgrading them.

In order to alleviate these problems, two new network paradigms, Software Defined Networks (SDN) [1-5] and Network Functions Virtualization (NFV) [6, 7], have been recently proposed, with the specific target of improving the flexibility of network service provisioning and reducing the time to market of new services.

SDN is an emerging architecture that aims at making the network dynamic, manageable, and cost-effective, by decoupling the system that makes decisions about where traffic is sent (the control plane) from the underlying system that forwards traffic to the selected destination (the data plane). In this way, the network control becomes directly programmable, and the underlying infrastructure is abstracted for applications and network services.

NFV is a core structural change in the way telecommunication infrastructure is deployed. The NFV initiative started in late 2012 by some of the biggest telecommunications service providers, which formed an Industry Specification Group (ISG) within the European Telecommunications Standards Institute (ETSI). The interest has grown, involving today more than 28 network operators and over 150 technology providers from across the telecommunications industry [7]. The NFV paradigm leverages on virtualization technologies and commercial off-the-shelf programmable hardware, such as general-purpose servers, storage, and switches, with the final target of decoupling the software implementation of network functions from the underlying hardware.

The coexistence and the interaction of both NFV and SDN paradigms is giving to the network operators the possibility of achieving greater agility and acceleration in new service deployments, with a consequent considerable reduction of both Capital Expenditure (CAPEX) and Operational Expenditure (OPEX) [8].

One of the main challenging problems in deploying an SDN/NFV network is an efficient design of the resource allocation and management functions that are in charge of the network Orchestrator. Although this task is well covered in data center and cloud scenarios [9, 10], it is currently a challenging problem in a geographic network, where transmission delays cannot be neglected, and the transmission capacities of the interconnection links are not comparable with the case of the above scenarios. This is the reason why the problem of orchestrating an SDN/NFV network is still open and attracts a lot of research interest from both academia and industry.


More specifically, the whole problem of orchestrating an SDN/NFV network is very complex, because it involves a design work at both internode and intranode levels [11, 12]. At the internode level, and in all the cases where the time between each execution is significantly greater than the time to collect, compute, and disseminate results, the application of a centralized approach is practicable [13, 14]. In these cases, in fact, taking into consideration both traffic characterization and the required level of quality of service (QoS), the Orchestrator is able to decide in a centralized manner how many instances of the same function have to be run simultaneously, the network nodes that have to execute them, and the routing strategies that allow traffic flows to cross the nodes where the requested network functions are running.

Instead, centralizing a strategy dealing with operations that require dynamic reconfiguration of network resources is actually unfeasible to be executed by the Orchestrator, for problems of resilience and scalability. Therefore, adaptive management operations with short timescales require a distributed approach. The authors of [15] propose a framework to support adaptive resource management operations, which involve short timescale reconfiguration of network resources, showing how requirements in terms of load-balancing and energy management [16-18] can be satisfied. Another work in the same direction is [19], which proposes a solution for the consolidation of VMs on local computing resources, exclusively based on local information. The work in [20] aims at alleviating the inter-VM network latency, defining a hypervisor scheduler algorithm that is able to take into consideration the allocation of the resources within a consolidated environment, scheduling VMs to reduce their waiting latency in the run queue. Another approach is applied in [21], which introduces a policy to manage the internal on/off switching of virtual network functions (VNFs) in NFV-compliant Customer Premises Equipment (CPE) devices.

The focus of this paper is on another fundamental problem that is inherent to resource allocation within an SDN/NFV node, that is, the decision of the percentage of CPU to be assigned to each VNF. If at a first glance this could seem a classical problem of processor sharing, which has been widely explored in the past literature [15, 22, 23], actually it is much more complex, because performance can be strongly improved by leveraging on the correlation with the output queues associated with the network interface cards (NICs).

With all this in mind, the main contribution of this paper is the definition of a processor-sharing policy, in the following referred to as Network-Aware Round Robin (NARR), which is specific for SDN/NFV nodes. Starting from the consideration that packets that have received the service of a network function from a virtual machine (VM) running on a given node are enqueued to wait for transmission through a given NIC, the proposed strategy dynamically changes the slices of the CPU assigned to each VNF according to the state of the output NIC queues. More specifically, the NARR strategy gives a larger CPU slice to serve packets that will leave the node through the NIC that is currently less loaded, in such a way as to minimize wastes of the NIC output link capacities, also minimizing the overall delay experienced by packets traversing nodes that implement NARR.
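To fix ideas, the following minimal Python sketch illustrates one way the described behavior could be realized; it is our illustration, not the authors' implementation, and the inverse-occupancy weighting rule, as well as all names and sample values, are assumptions.

```python
def narr_slices(mu_p, backlog, k_nic):
    """Split the processor rate mu_p among VNFs according to NIC queue states.

    backlog[m][l]: packets buffered in VNF m's queue headed towards NIC l.
    k_nic[l]: current occupancy of the output queue of NIC l.
    A VNF gets a larger CPU slice when its packets are headed to lightly
    loaded NICs, so that no output link capacity is left idle.
    """
    weights = []
    for per_nic in backlog:
        # One VNF's weight: its traffic counted with a NIC "emptiness" factor.
        w = sum(q / (1.0 + k_nic[l]) for l, q in enumerate(per_nic))
        weights.append(w)
    total = sum(weights) or 1.0
    return [mu_p * w / total for w in weights]

# Hypothetical example: 2 VNFs, 2 NICs; NIC 0 is almost full, NIC 1 is empty,
# so the VNF feeding NIC 1 receives most of the processor rate.
print(narr_slices(1000.0, backlog=[[8, 1], [1, 9]], k_nic=[90, 2]))
```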

As a side contribution, the paper calculates an on-off model for the traffic leaving the SDN/NFV node on each NIC output link. This model can be used as a building block for the design and performance evaluation of an entire network.

The paper is structured as follows. Section 2 describes the node architecture. Section 3 introduces the NARR strategy. Section 4 presents a case study and shows some numerical results. Finally, Section 5 draws some conclusions and discusses some future work.

2. System Description

The target of this section is the description of the system weconsider in the rest of the paper It is an SDNNFV node asthe one considered in [11 12] where we will apply the NARRstrategy Its architecture shown in Figure 1(a) is compliantwith the ETSI Specifications [24] It is composed of three dif-ferent domains namely theCompute domain theHypervisordomain and the Infrastructure Network domain The Com-pute domain provides the computational and storage hard-ware resources that allow the node to host the VNFs Thanksto the computing and storage virtualization provided by theHypervisor domain a VM can be created migrated fromone node to another one and halted in order to optimize thedeployment according to specific performance parametersCommunications among theVMs and between theVMs andthe external environment are provided by the InfrastructureNetwork domain

The SDN/NFV node is remotely controlled by the Orchestrator, whose architecture is shown in Figure 1(b). It is constituted by three main blocks. The Orchestration Engine executes all the algorithms and strategies to manage and orchestrate the whole network. After each decision, the Orchestration Engine requests that the NFV Coordinator instantiate, migrate, or halt VMs and, consequently, requests that the SDN Controller modify the flow tables of the SDN switches in the network in such a way that traffic flows can traverse the VMs hosting the requested VNFs.

With this in mind, a functional architecture of the NFV node is represented in Figure 2. Its main components are the Processor, which manages the Compute domain and hosts the Hypervisor domain, and the Network Interface Cards (NICs), with their queues, which constitute the "Network Hardware" block in Figure 1.

Let $M$ be the number of virtual network functions (VNFs) that are running in the node, and let $L$ be the number of output NICs. In order to simplify notation, in the following we will assume that all the NICs have the same characteristics in terms of buffer capacity and output rate. So, let $K^{(\mathrm{NIC})}$ be the size of the queue associated with each NIC, that is, the maximum number of packets that each queue can contain, and let $\mu^{(\mathrm{NIC})}$ be the transmission rate of the output link associated with each NIC, expressed in bits/s.

The Flow Distributor block has the task of routing each entering flow towards the function required by the flow. It is a software SDN switch that can be implemented, for example, with Open vSwitch [25]. It routes the flows to the VMs running the requested VNFs according to the control messages received from the SDN Controller residing in the Orchestrator.

[Figure 1: Network function virtualization infrastructure. (a) NFV node; (b) Orchestrator.]

[Figure 2: NFV node functional architecture.]

The most common protocol that can be used for the communications between the SDN Controller and the Orchestrator is OpenFlow [26].

Let $\mu^{(P)}$ be the total processing rate of the processor, expressed in packets/s. This rate is shared among all the active functions according to a processor rate scheduling strategy. Let $\mu^{(F)}$ be the array whose generic element $\mu^{(F)}_{[m]}$, with $m \in \{1, \ldots, M\}$, is the portion of the processor rate assigned to the VM implementing the function $F_m$. Of course, we have

$$\sum_{m=1}^{M} \mu^{(F)}_{[m]} = \mu^{(P)} \qquad (1)$$

Once a packet has been served by the required function, it is sent to one of the NICs to exit from the node. If the NIC is transmitting another packet, the arriving packets are enqueued in the NIC queue. We will indicate the queue associated with the generic NIC $l$ as $Q^{(\mathrm{NIC})}_{l}$.

In order to implement the NARR strategy proposed in this paper, we realize the block relative to each function with a set of $L$ parallel queues, in the following referred to as inside-function queues, as shown in Figure 3 for the generic function $F_m$. The generic $l$th inside-function queue of the function $F_m$, indicated as $Q_{m,l}$ in Figure 3, is used to enqueue packets that, after receiving the service of the function $F_m$, will leave the node through the NIC $l$. Let $K^{(F)}_{\mathrm{Ins}}$ be the size of each inside-function queue. Each inside-function queue of the generic function $F_m$ receives a portion of the processor rate assigned to that function. Let $\mu^{(F_{\mathrm{Ins}})}_{[m,l]}$ be the portion of the processor rate assigned to the queue $Q_{m,l}$ of the function $F_m$. Of course, we have

$$\sum_{l=1}^{L} \mu^{(F_{\mathrm{Ins}})}_{[m,l]} = \mu^{(F)}_{[m]} \qquad (2)$$

[Figure 3: Block diagram of the generic function $F_m$.]

The portion of processor rate associated with each inside-function queue is dynamically changed by the Processor Arbiter, according to the NARR strategy described in Section 3.
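To fix ideas, the following minimal Python sketch models the queueing structure just described: $M$ functions, each with $L$ inside-function queues, plus the $L$ NIC queues. The names and the drop policy are illustrative assumptions of ours, with the case-study sizes of Section 4 plugged in.

from collections import deque

M, L = 4, 3                  # number of VNFs and output NICs (case-study values)
K_NIC = 3000                 # NIC queue size, in packets
K_F_INS = 3000               # inside-function queue size, in packets

# One FIFO per (function m, output NIC l): packets served by F_m and
# destined to NIC l wait here for their share of the processor.
inside_queues = [[deque() for _ in range(L)] for _ in range(M)]

# One FIFO per output NIC, fed by the inside-function queues.
nic_queues = [deque() for _ in range(L)]

def enqueue_after_service(m, l, packet):
    """Store a packet that left function F_m and must exit through NIC l."""
    if len(inside_queues[m][l]) < K_F_INS:
        inside_queues[m][l].append(packet)   # otherwise the packet is dropped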

3. NARR Processor-Sharing Strategy

The NARR (Network-Aware Round Robin) processor-sharing strategy observes the state of both the inside-function queues and the NIC queues, with the goal of reducing as far as possible the inactivity periods of the NIC output links. As already mentioned in the Introduction, its definition starts from the fact that packets that have received the service of a VNF are enqueued to wait for transmission through a given NIC. So, in order to avoid output link capacity waste, NARR dynamically changes the slices of the CPU assigned to each VNF, and in particular to each inside-function queue, according to the state of the output NIC queues, assigning larger CPU slices to serve packets that will leave the node through less-loaded NICs.

More specifically, the Processor Arbiter decides the processor rate portions according to two different steps.

Step 1 (assignment of the processor rate portion to the aggregation of queues whose output is a specific NIC). This step meets the target of the proposed strategy, that is, to reduce as much as possible underutilization of the NIC output links and, as a consequence, delays in the relative queues. To this purpose, let us consider a virtual queue that contains all the packets that are stored in all the inside-function queues $Q_{m,l}$, for each $m \in [1, M]$, that is, all the packets that will leave the node through the NIC $l$. Let us indicate this virtual queue as $Q^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$ and its service rate as $\mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$. Of course, we have

$$\mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}} = \sum_{m=1}^{M} \mu^{(F_{\mathrm{Ins}})}_{[m,l]} \qquad (3)$$

With this in mind, the idea is to give a higher processor slice to the inside-function queues whose flows are directed to the NICs that are emptying.

Taking into account the goal of privileging the flows that will leave the node through underloaded NICs, the Processor Arbiter calculates $\mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$ as follows:

$$\mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}} = \frac{q_{\mathrm{ref}} - S^{(\mathrm{NIC})}_{l}}{\sum_{j=1}^{L} \left( q_{\mathrm{ref}} - S^{(\mathrm{NIC})}_{j} \right)} \, \mu^{(P)} \qquad (4)$$

where $S^{(\mathrm{NIC})}_{l}$ represents the state of the queue associated with the NIC $l$, while $q_{\mathrm{ref}}$ is defined as follows:

$$q_{\mathrm{ref}} = \min \left\{ \alpha \cdot \max_{j} \left( S^{(\mathrm{NIC})}_{j} \right),\; K^{(\mathrm{NIC})} \right\} \qquad (5)$$

The term $q_{\mathrm{ref}}$ is a reference target value calculated from the state of the NIC queue that has the highest length, amplified with a coefficient $\alpha$ and truncated to the maximum queue size $K^{(\mathrm{NIC})}$. It is determined in such a way that, if we consider $\alpha = 1$, the NIC queue that has the highest length does not receive packets from the inside-function queues, because their service rate is set to zero; the other queues receive packets with a rate that is proportional to the distance between their length and the length of the most overloaded NIC queue. However, through an extensive set of simulations, we deduced that setting $\alpha = 1$ causes bad performance, because there is always a group of inside-function queues that are not served. Instead, all the $\alpha$ values in the interval $]1, 2]$ give almost equivalent performance. For this reason, in the numerical analysis presented in Section 4 we have set $\alpha = 1.2$.

Step 2 (assignment of the processor rate portion to each inside-function queue). Let us consider the generic $l$th inside-function queue of the function $F_m$, that is, the queue $Q_{m,l}$. Its service rate is calculated as being proportional to the current state of this queue in comparison with the other $l$th queues of the other functions. To this purpose, let us indicate the state of the virtual queue $Q^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$ as $S^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$. Of course, it can be calculated as the sum of the states of all the inside-function queues $Q_{m,l}$, for each $m \in [1, M]$, that is,

$$S^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}} = \sum_{k=1}^{M} S^{(F_{\mathrm{Ins}})}_{k,l} \qquad (6)$$

So, the service rate of the inside-function queue $Q_{m,l}$ is determined as a fraction of the service rate assigned at the first step to the virtual queue $Q^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$, that is, $\mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$, as follows:

$$\mu^{(F_{\mathrm{Ins}})}_{[m,l]} = \frac{S^{(F_{\mathrm{Ins}})}_{m,l}}{S^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}} \, \mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}} \qquad (7)$$

Of course, if at any time an inside-function queue remains empty, the processor rate portion assigned to it will be shared among the other queues, proportionally to the processor portions previously assigned. Likewise, if at some instant an empty queue receives a new packet, the previous processor rate portion is reassigned to that queue.
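The two steps above map directly onto a few lines of code. The following Python sketch (an illustrative implementation under our own naming conventions, not the authors' simulator code) computes the processor rate portions of equations (4)-(7) from the current NIC queue states and inside-function queue states; the redistribution rule for empty queues is simplified to keep the sketch short.

def narr_rates(S_nic, S_ins, mu_P, K_nic, alpha=1.2):
    """S_nic[l]: state of NIC queue l; S_ins[m][l]: state of inside-function
    queue Q_{m,l}; mu_P: total processor rate. Returns mu[m][l]."""
    L = len(S_nic)
    M = len(S_ins)
    # Step 1: rate of each aggregated virtual queue, equations (4)-(5).
    # With alpha > 1, q_ref >= max(S_nic), so all weights are nonnegative.
    q_ref = min(alpha * max(S_nic), K_nic)
    weights = [q_ref - s for s in S_nic]
    tot = sum(weights)
    mu_aggr = [mu_P * w / tot for w in weights] if tot > 0 else [mu_P / L] * L
    # Step 2: split each aggregate among its inside-function queues, eq. (6)-(7).
    mu = [[0.0] * L for _ in range(M)]
    for l in range(L):
        S_aggr = sum(S_ins[m][l] for m in range(M))       # equation (6)
        if S_aggr == 0:
            continue  # empty aggregate: in the full policy its slice is redistributed
        for m in range(M):
            mu[m][l] = mu_aggr[l] * S_ins[m][l] / S_aggr  # equation (7)
    return mu

For instance, with $L = 3$ NIC queues of lengths (60, 0, 40) and $\alpha = 1.2$, we get $q_{\mathrm{ref}} = 72$ and aggregate weights (12, 72, 32): the inside-function queues feeding the empty NIC receive the largest share of the processor, consistent with the NARR objective of privileging flows directed to underloaded NICs.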


4. Case Study

In this section we present a numerical analysis of the behavior of an SDN/NFV node that applies the proposed NARR processor-sharing strategy, with the target of evaluating the achieved performance. To this purpose, we will consider two other processor-sharing strategies as reference, in the following referred to as round robin (RR) and queue-length weighted round robin (QLWRR). In both the reference cases the node has the same $L$ NIC queues, but it has only $M$ processor queues, one for each function. Each of the $M$ queues has a size of $K^{(F)} = L \cdot K^{(F)}_{\mathrm{Ins}}$, where $K^{(F)}_{\mathrm{Ins}}$ represents the size of each internal function queue, already defined so far for the proposed strategy.

The RR strategy applies the classical round robin scheduling policy to serve the $M$ function queues, that is, it serves each function queue with a rate $\mu^{(F)}_{\mathrm{RR}} = \mu^{(P)} / M$.

The QLWRR strategy, on the other hand, serves each function queue with a rate that is proportional to the queue length, that is,

$$\mu^{(F_m)}_{\mathrm{QLWRR}} = \frac{S^{(F_m)}}{\sum_{k=1}^{M} S^{(F_k)}} \, \mu^{(P)} \qquad (8)$$

where $S^{(F_m)}$ is the state of the queue associated with the function $F_m$.
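Both reference policies admit one-line implementations. A small sketch under the same conventions as above follows, with S_F[m] denoting the length of the single queue of function $F_m$; the fallback to RR when all queues are empty is our own assumption.

def rr_rates(mu_P, M):
    """Round robin: equal share of the processor to each function queue."""
    return [mu_P / M] * M

def qlwrr_rates(mu_P, S_F):
    """Queue-length weighted round robin, equation (8)."""
    tot = sum(S_F)
    if tot == 0:
        return rr_rates(mu_P, len(S_F))   # assumption: fall back to RR
    return [mu_P * s / tot for s in S_F]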

4.1. Parameter Settings. In this numerical analysis we consider a node with $M = 4$ VNFs and $L = 3$ output NICs. We loaded the node with a balanced traffic constituted by $N_F = 12$ flows, each characterized by a different 2-tuple $\langle$requested NFV, output NIC$\rangle$. Each flow has been generated with an on-off model characterized by exponentially distributed on and off periods. When a flow is in the off state, no packets arrive to the node from it; instead, when it is in the on state, packets arrive with an average rate $\lambda_{\mathrm{ON}}$. Each packet is assumed to have an exponentially distributed size with a mean of 1 kbyte. In all the simulations we have considered the same on-off cycle duration $\delta = T_{\mathrm{OFF}} + T_{\mathrm{ON}} = 5$ ms, while we have varied the burstiness. Burstiness of on-off sources is defined as

$$b = \frac{\lambda_{\mathrm{ON}}}{\lambda_{\mathrm{Mean}}} \qquad (9)$$

where $\lambda_{\mathrm{Mean}}$ is the mean emission rate. Now, indicating the probability of the ON state as $\pi_{\mathrm{ON}}$, and taking into account that $\lambda_{\mathrm{Mean}} = \lambda_{\mathrm{ON}} \cdot \pi_{\mathrm{ON}}$ and $\pi_{\mathrm{ON}} = T_{\mathrm{ON}} / (T_{\mathrm{OFF}} + T_{\mathrm{ON}})$, we have

$$b = \frac{T_{\mathrm{OFF}} + T_{\mathrm{ON}}}{T_{\mathrm{ON}}} \qquad (10)$$

In our analysis the burstiness has been varied in the range $[2, 22]$. Consequently, we have derived the mean durations of the off and on periods as follows:

$$T_{\mathrm{ON}} = \frac{\delta}{b}, \qquad T_{\mathrm{OFF}} = \delta - T_{\mathrm{ON}} \qquad (11)$$

Finally, in order to maintain the same mean packet rate for different values of $T_{\mathrm{OFF}}$ and $T_{\mathrm{ON}}$, we have assumed that 75 packets are transmitted on average in each on period, that is, $\lambda_{\mathrm{ON}} = 75 / T_{\mathrm{ON}}$. The resulting mean emission rate is $\lambda_{\mathrm{Mean}} = 122.9$ Mbit/s.

As far as the NICs are concerned, we have considered an output rate of $\mu^{(\mathrm{NIC})} = 980$ Mbit/s, so having a utilization coefficient on each NIC of $\rho^{(\mathrm{NIC})} = 0.5$, and a queue size $K^{(\mathrm{NIC})} = 3000$ packets. Instead, regarding the processor, we considered a size of each inside-function queue of $K^{(F)}_{\mathrm{Ins}} = 3000$ packets. Finally, we have analyzed two different processor cases. In the first case we considered a processor that is able to process $\mu^{(P)} = 306$ kpackets/s, while in the second case we assumed a processor rate of $\mu^{(P)} = 204$ kpackets/s. Therefore, defining the processor utilization coefficient as follows:

$$\rho^{(P)} = \frac{N_F \cdot \lambda_{\mathrm{Mean}}}{\mu^{(P)}} \qquad (12)$$

with $N_F \cdot \lambda_{\mathrm{Mean}}$ being the total mean arrival rate to the node, the two considered cases are characterized by a processor utilization coefficient of $\rho^{(P)}_{\mathrm{Low}} = 0.6$ and $\rho^{(P)}_{\mathrm{High}} = 0.9$, respectively.
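These settings can be cross-checked with a few lines of arithmetic. The snippet below reproduces the quoted values from equations (9)-(12), under our assumption of 1 kbyte = 1024 bytes of average packet size.

delta = 5e-3                      # on-off cycle duration: 5 ms
pkts_per_on = 75                  # packets sent on average in each on period
lam_mean_pkt = pkts_per_on / delta          # 15,000 packets/s per flow
lam_mean_bit = lam_mean_pkt * 1024 * 8      # ~122.9 Mbit/s per flow

N_F = 12
for mu_P in (306e3, 204e3):       # the two processor rates, in packets/s
    rho = N_F * lam_mean_pkt / mu_P         # equation (12)
    print(f"mu_P = {mu_P/1e3:.0f} kpackets/s -> rho_P = {rho:.2f}")
# -> rho_P ~ 0.59 and ~0.88, matching the two cases rho_Low and rho_High

b = 7                             # example burstiness used in Section 4.2
T_on = delta / b                  # equation (11)
T_off = delta - T_on
lam_on = pkts_per_on / T_on       # peak packet rate during the on state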

4.2. Numerical Results. In this section we present some results achieved by discrete-event simulations. The simulation tool used in the paper is publicly available in [27]. We first present a temporal analysis of the main variables characterizing the SDN/NFV node, and then we show a performance comparison of NARR with the two strategies RR and QLWRR, taken as a reference.

For the temporal analysis we focus on a short time interval of 720 μs, in order to be able to clearly highlight the evolution of the considered processes. We have loaded the node with an on-off traffic like the one described in Section 4.1, with a burstiness $b = 7$.

Figures 4, 5, and 6 show the time evolution of the length of the queues associated with the NICs, the processor slice assigned to each virtual queue loading each NIC, and the length of the same virtual queues. We can subdivide the considered time interval into three different periods.

In the first period, ranging between the instants 0.1063 and 0.1067, from Figure 4 we can notice that the NIC queue $Q^{(\mathrm{NIC})}_{1}$ has a greater length than the queue $Q^{(\mathrm{NIC})}_{3}$, while $Q^{(\mathrm{NIC})}_{2}$ is empty. For this reason, in this period, as shown in Figure 5, the processor is shared between $Q^{(\rightarrow \mathrm{NIC}_1)}_{\mathrm{Aggr}}$ and $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$, and the slice assigned to serve $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ is higher, in such a way that the two queue lengths $Q^{(\mathrm{NIC})}_{1}$ and $Q^{(\mathrm{NIC})}_{3}$ reach the same value, a situation that happens at the end of the first period, around the instant 0.1067. During this period the behavior of both the virtual queues $Q^{(\rightarrow \mathrm{NIC}_1)}_{\mathrm{Aggr}}$ and $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ in Figure 6 remains flat, showing the fact that the received processor rates are able to balance the arrival rates.

At the beginning of the second period, which ranges between the instants 0.1067 and 0.10693, the processor rate assigned to the virtual queue $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ has become insufficient to serve the amount of arriving traffic, and so the virtual queue $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ increases its length, as shown in Figure 6.

[Figure 4: Length evolution of the NIC queues.]

[Figure 5: Packet rate assigned to the virtual queues.]

[Figure 6: Length evolution of the virtual queues.]

During this second period, the processor slices assigned to the two queues $Q^{(\rightarrow \mathrm{NIC}_1)}_{\mathrm{Aggr}}$ and $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ are adjusted in such a way that $Q^{(\mathrm{NIC})}_{1}$ and $Q^{(\mathrm{NIC})}_{3}$ remain with comparable lengths.

The last period starts at the instant 0.10693, and is characterized by the fact that the aggregated queue $Q^{(\rightarrow \mathrm{NIC}_2)}_{\mathrm{Aggr}}$ leaves the empty state and therefore participates in the processor-sharing process. Since the NIC queue $Q^{(\mathrm{NIC})}_{2}$ is low-loaded, as shown in Figure 4, the largest slice is assigned to $Q^{(\rightarrow \mathrm{NIC}_2)}_{\mathrm{Aggr}}$, in such a way that $Q^{(\mathrm{NIC})}_{2}$ can reach the same length as the other two NIC queues as soon as possible.

Now, in order to show how the second step of the proposed strategy works, we present the behavior of the inside-function queues whose output is sent to the NIC queue $Q^{(\mathrm{NIC})}_{3}$ during the same short time interval considered so far. More specifically, Figures 7 and 8 show the length of the considered inside-function queues and the processor slices assigned to them, respectively. The behavior of the queue $Q_{2,3}$ is not shown because it is empty in the considered period. As we can observe from the above figures, we can subdivide the interval into four periods:

(i) The first period, ranging in the interval [0.1063, 0.10657], is characterized by an empty state of the queue $Q_{3,3}$. Thus, in this period, the processor slice assigned to the aggregated queue $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ is shared by $Q_{1,3}$ and $Q_{4,3}$ only.

(ii) During the second period, ranging in the interval [0.10657, 0.1067], $Q_{1,3}$ is scarcely loaded (in particular, it is empty in the second part of this period), and so the processor slice assigned to $Q_{3,3}$ is increased.

(iii) In the third period, ranging between 0.1067 and 0.10693, all the queues increase and equally share the processor.

(iv) Finally, in the last period, starting at the instant 0.10693, as already observed in Figure 5, the processor slice assigned to the aggregated queue $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ is suddenly decreased, and consequently the slices assigned to the queues $Q_{1,3}$, $Q_{3,3}$, and $Q_{4,3}$ are decreased as well.

The steady-state analysis is presented in Figures 9, 10, and 11, which show the mean delay in the inside-function queues and in the NIC queues, and the durations of the off and on states on the output links, respectively. The values reported in all the figures have been derived as the mean values of the results of many simulation experiments, using Student's t-distribution with a 95% confidence interval. The number of experiments carried out to evaluate each numerical result has been automatically decided by the simulation tool, with the requirement of achieving a confidence interval less than 0.001 of the estimated mean value. To this purpose, the confidence interval is calculated at the end of each run, and simulation is stopped only when the confidence interval matches the maximum error requirement. The duration of each run has been chosen in such a way that the sample standard deviation is so low that fewer than 30 runs are enough to match the requirement.
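A minimal sketch of this stopping rule, using the Student's t quantile from SciPy, is given below; run_once() is a placeholder for one independent simulation run, and the 0.001 relative-error target follows the text.

import math
from scipy.stats import t as student_t

def run_until_confident(run_once, rel_err=0.001, conf=0.95, min_runs=3):
    """Repeat independent runs until the half-width of the confidence
    interval drops below rel_err times the estimated mean."""
    samples = []
    while True:
        samples.append(run_once())            # one simulation experiment
        n = len(samples)
        if n < min_runs:
            continue
        mean = sum(samples) / n
        var = sum((x - mean) ** 2 for x in samples) / (n - 1)
        half = student_t.ppf((1 + conf) / 2, n - 1) * math.sqrt(var / n)
        if half <= rel_err * abs(mean):
            return mean, half, n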

[Figure 7: Length evolution of the inside-function queues $Q_{m,3}$, for each $m \in [1, M]$. (a) Queue $Q_{1,3}$; (b) queue $Q_{3,3}$; (c) queue $Q_{4,3}$.]

The figures compare the results achieved with the proposed strategy with the ones obtained with the two strategies RR and QLWRR. As said so far, results have been obtained against the burstiness and for two different values of the utilization coefficient, that is, $\rho^{(P)}_{\mathrm{Low}} = 0.6$ and $\rho^{(P)}_{\mathrm{High}} = 0.9$.

As expected, the mean lengths of the processor queues and the NIC queues increase with both the burstiness and the utilization coefficient.

Instead, we can note that the mean length of the processor queues is not affected by the applied policy. In fact, packets requiring the same function are enqueued in a unique queue and served with a rate $\mu^{(P)}/4$ when the RR or QLWRR strategies are applied, while, when the NARR strategy is used, they are split into 12 different queues and served with a rate $\mu^{(P)}/12$. If in classical queueing theory [28] the second case is worse than the first one, because of the presence of service rate wastes during the more probable periods of empty queues, this is not the case here, because the processor capacity of an empty queue is dynamically reassigned to the other queues.

The advantage of the NARR strategy is evident in Figure 10, where the mean delay in the NIC queues is represented. In fact, we can observe that by only giving more processor rate to the most loaded processor queues (with the QLWRR strategy) performance improvements are negligible, while by applying the NARR strategy we are able to obtain a delay reduction of about 12% in the case of a more performant processor ($\rho^{(P)}_{\mathrm{Low}} = 0.6$), reaching 50% when the processor works with $\rho^{(P)}_{\mathrm{High}} = 0.9$. The performance gain achieved with the NARR strategy increases with the burstiness and the processor load conditions, which are both likely. In fact, the first condition holds because of the high burstiness of Internet traffic; the second one is true as well because the processor should not be overdimensioned, for economic reasons; otherwise, if overdimensioned, it can be controlled with a processor rate management policy like the one presented in [29], in order to save energy.

Finally, Figure 11 shows the mean durations of the on and off periods on the node output links.

[Figure 8: Processor rate assigned to the inside-function queues $Q_{m,3}$, for each $m \in [1, M]$. (a) Processor rate $\mu^{(F_{\mathrm{Ins}})}_{[1,3]}$; (b) processor rate $\mu^{(F_{\mathrm{Ins}})}_{[3,3]}$; (c) processor rate $\mu^{(F_{\mathrm{Ins}})}_{[4,3]}$.]

Only one curve is shown for each case, because we have considered a utilization coefficient $\rho^{(\mathrm{NIC})} = 0.5$ on each NIC queue, and therefore in the considered case we have $T_{\mathrm{ON}} = T_{\mathrm{OFF}}$. As expected, the mean on-off durations are higher when the processor rate is higher (i.e., when the utilization coefficient is lower). This is because, in this case, batches of packets reach the NIC queues more quickly and are then transmitted back-to-back, producing longer busy and idle periods on the output links. These results can be used to model the output traffic of each SDN/NFV node as an Interrupted Poisson Process (IPP) [30]. This model can be iteratively used to represent the input traffic of other nodes, with the final target of realizing the model of a whole SDN/NFV network.

5. Conclusions and Future Work

This paper addresses the problem of intranode resource allocation by introducing NARR, a processor-sharing strategy that leverages the consideration that, in any SDN/NFV node, packets that have received the service of a VNF are enqueued to wait for transmission through one of the output NICs. Therefore, the idea at the base of NARR is to dynamically change the slices of the CPU assigned to each VNF according to the state of the output NIC queues, giving more CPU to serve packets that will leave the node through the less-loaded NICs. In this way, wastes of the NIC output link capacities are minimized, and consequently the overall delay experienced by packets traversing the nodes that implement NARR is reduced.

SDN/NFV nodes that implement NARR can coexist in the same network with nodes that use other strategies, so facilitating a gradual introduction in the Future Internet.

As a side contribution, the simulator tool, which is publicly available on the web, gives the on-off model of the output links associated with each NIC as one of its results. This model can be used as a building block to realize a model for the design and performance evaluation of a whole SDN/NFV network.

[Figure 9: Mean delay in the function internal queues.]

[Figure 10: Mean delay in the NIC queues.]

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work has been partially supported by the INPUT (In-Network Programmability for Next-Generation Personal cloUd Service Support) project, funded by the European Commission under the Horizon 2020 Programme (Call H2020-ICT-2014-1, Grant no. 644672).

[Figure 11: Mean durations of the off and on states of the traffic on the output links.]

References

[1] White paper on "Software-Defined Networking: The New Norm for Networks," https://www.opennetworking.org.

[2] M. Yu, L. Jose, and R. Miao, "Software defined traffic measurement with OpenSketch," in Proceedings of the Symposium on Network Systems Design and Implementation (NSDI '13), vol. 13, pp. 29-42, Lombard, Ill, USA, April 2013.

[3] H. Kim and N. Feamster, "Improving network management with software defined networking," IEEE Communications Magazine, vol. 51, no. 2, pp. 114-119, 2013.

[4] S. H. Yeganeh, A. Tootoonchian, and Y. Ganjali, "On scalability of software-defined networking," IEEE Communications Magazine, vol. 51, no. 2, pp. 136-141, 2013.

[5] B. A. A. Nunes, M. Mendonca, X.-N. Nguyen, K. Obraczka, and T. Turletti, "A survey of software-defined networking: past, present, and future of programmable networks," IEEE Communications Surveys and Tutorials, vol. 16, no. 3, pp. 1617-1634, 2014.

[6] White paper on "Network Functions Virtualisation," http://portal.etsi.org/NFV/NFV_White_Paper.pdf.

[7] Network Functions Virtualisation (NFV): Network Operator Perspectives on Industry Progress, ETSI, October 2013, http://portal.etsi.org/NFV/NFV_White_Paper2.pdf.

[8] A. Manzalini, R. Saracco, E. Zerbini, et al., "Software-Defined Networks for Future Networks and Services," White Paper based on the IEEE Workshop SDN4FNS, http://sites.ieee.org/sdn4fns/whitepaper.

[9] R. Z. Frantz, R. Corchuelo, and J. L. Arjona, "An efficient orchestration engine for the cloud," in Proceedings of the IEEE 3rd International Conference on Cloud Computing Technology and Science (CloudCom '11), vol. 2, pp. 711-716, Athens, Greece, December 2011.

[10] K. Bousselmi, Z. Brahmi, and M. M. Gammoudi, "Cloud services orchestration: a comparative study of existing approaches," in Proceedings of the 28th IEEE International Conference on Advanced Information Networking and Applications Workshops (WAINA '14), pp. 410-416, Victoria, Canada, May 2014.

[11] A. Lombardo, A. Manzalini, V. Riccobene, and G. Schembra, "An analytical tool for performance evaluation of software defined networking services," in Proceedings of the IEEE Network Operations and Management Symposium (NOMS '14), pp. 1-7, Krakow, Poland, May 2014.

[12] A. Lombardo, A. Manzalini, G. Schembra, G. Faraci, C. Rametta, and V. Riccobene, "An open framework to enable NetFATE (network functions at the edge)," in Proceedings of the IEEE Workshop on Management Issues in SDN, SDI and NFV (Mission '15), pp. 1-6, London, UK, April 2015.

[13] A. Manzalini, R. Minerva, F. Callegati, W. Cerroni, and A. Campi, "Clouds of virtual machines in edge networks," IEEE Communications Magazine, vol. 51, no. 7, pp. 63-70, 2013.

[14] H. Moens and F. De Turck, "VNF-P: a model for efficient placement of virtualized network functions," in Proceedings of the 10th International Conference on Network and Service Management (CNSM '14), pp. 418-423, Rio de Janeiro, Brazil, November 2014.

[15] K. Katsalis, G. S. Paschos, Y. Viniotis, and L. Tassiulas, "CPU provisioning algorithms for service differentiation in cloud-based environments," IEEE Transactions on Network and Service Management, vol. 12, no. 1, pp. 61-74, 2015.

[16] D. Tuncer, M. Charalambides, G. Pavlou, and N. Wang, "Towards decentralized and adaptive network resource management," in Proceedings of the 7th International Conference on Network and Service Management (CNSM '11), pp. 1-6, Paris, France, October 2011.

[17] D. Tuncer, M. Charalambides, G. Pavlou, and N. Wang, "DACoRM: a coordinated, decentralized and adaptive network resource management scheme," in Proceedings of the IEEE Network Operations and Management Symposium (NOMS '12), pp. 417-425, Maui, Hawaii, USA, April 2012.

[18] M. Charalambides, D. Tuncer, L. Mamatas, and G. Pavlou, "Energy-aware adaptive network resource management," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 369-377, Ghent, Belgium, May 2013.

[19] C. Mastroianni, M. Meo, and G. Papuzzo, "Probabilistic consolidation of virtual machines in self-organizing cloud data centers," IEEE Transactions on Cloud Computing, vol. 1, no. 2, pp. 215-228, 2013.

[20] B. Guan, J. Wu, Y. Wang, and S. U. Khan, "CIVSched: a communication-aware inter-VM scheduling technique for decreased network latency between co-located VMs," IEEE Transactions on Cloud Computing, vol. 2, no. 3, pp. 320-332, 2014.

[21] G. Faraci and G. Schembra, "An analytical model to design and manage a green SDN/NFV CPE node," IEEE Transactions on Network and Service Management, vol. 12, no. 3, pp. 435-450, 2015.

[22] L. Kleinrock, "Time-shared systems: a theoretical treatment," Journal of the ACM, vol. 14, no. 2, pp. 242-261, 1967.

[23] S. F. Yashkov and A. S. Yashkova, "Processor sharing: a survey of the mathematical theory," Automation and Remote Control, vol. 68, no. 9, pp. 1662-1731, 2007.

[24] ETSI GS NFV-INF 001 v1.1.1, "Network Functions Virtualization (NFV): Infrastructure Overview," 2015, https://www.etsi.org/deliver/etsi_gs/NFV-INF/001_099/001/01.01.01_60/gs_NFV-INF001v010101p.pdf.

[25] T. N. Subedi, K. K. Nguyen, and M. Cheriet, "OpenFlow-based in-network Layer-2 adaptive multipath aggregation in data centers," Computer Communications, vol. 61, pp. 58-69, 2015.

[26] N. McKeown, T. Anderson, H. Balakrishnan, et al., "OpenFlow: enabling innovation in campus networks," ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, pp. 69-74, 2008.

[27] NARR simulator v0.1, http://www.diit.unict.it/arti/Tools/NARRSimulator_v01.zip.

[28] L. Kleinrock, Queueing Systems. Volume 1: Theory, Wiley-Interscience, 1st edition, 1975.

[29] R. Bruschi, P. Lago, A. Lombardo, and G. Schembra, "Modeling power management in networked devices," Computer Communications, vol. 50, pp. 95-109, 2014.

[30] W. Fischer and K. S. Meier-Hellstern, "The Markov-Modulated Poisson Process (MMPP) cookbook," Performance Evaluation, vol. 18, no. 2, pp. 149-171, 1993.


Contents

Design of High Throughput and Cost-Efficient Data Center Networks, Vincenzo Eramo, Xavier Hesselbach-Serra, Yan Luo, and Juan Felipe Botero
Volume 2016, Article ID 4695185, 2 pages

Virtual Networking Performance in OpenStack Platform for Network Function Virtualization, Franco Callegati, Walter Cerroni, and Chiara Contoli
Volume 2016, Article ID 5249421, 15 pages

Server Resource Dimensioning and Routing of Service Function Chain in NFV Network Architectures, V. Eramo, A. Tosti, and E. Miucci
Volume 2016, Article ID 7139852, 12 pages

A Game for Energy-Aware Allocation of Virtualized Network Functions, Roberto Bruschi, Alessandro Carrega, and Franco Davoli
Volume 2016, Article ID 4067186, 10 pages

A Processor-Sharing Scheduling Strategy for NFV Nodes, Giuseppe Faraci, Alfio Lombardo, and Giovanni Schembra
Volume 2016, Article ID 3583962, 10 pages

Editorial

Design of High Throughput and Cost-Efficient Data Center Networks

Vincenzo Eramo,1 Xavier Hesselbach-Serra,2 Yan Luo,3 and Juan Felipe Botero4

1 Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, 00184 Rome, Italy
2 Network Engineering Department (ENTEL), Universitat Politecnica de Catalunya, 08034 Barcelona, Spain
3 Department of Electronics and Computers Engineering (DECE), University of Massachusetts Lowell, Lowell, MA 01854, USA
4 Department of Electronic and Telecommunications Engineering (DETE), Universidad de Antioquia, Oficina 19-450, Medellín, Colombia

Correspondence should be addressed to Vincenzo Eramo; vincenzo.eramo@uniroma1.it

Received 20 April 2016; Accepted 20 April 2016

Copyright © 2016 Vincenzo Eramo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Data centers (DCs) are characterized by the sharing of compute and storage resources to support Internet services. Today, many companies (Amazon, Google, Facebook, etc.) use data centers to offer storage, web search, and large-computation services, with multibillion-dollar businesses. The servers are interconnected by elements (switches, routers, interconnection systems, etc.) of a network platform that is referred to as the Data Center Network (DCN).

Network Function Virtualization (NFV) technology, introduced by the European Telecommunications Standards Institute (ETSI), applies cloud computing techniques in the telecommunication field, allowing for a virtualization of the network functions, which are executed on software modules running in data centers. Any network service is represented by a Service Function Chain (SFC), that is, a set of VNFs to be executed according to a given order. The running of VNFs needs the instantiation of VNF instances (VNFIs), which in general are software modules executed on virtual machines.

The support of NFV requires high-performance servers, due to the higher requirements of network services with respect to classical cloud applications.

The purpose of this special issue is to study and evaluate new solutions for the support of the NFV technology.

The special issue consists of four papers, whose brief summaries are listed below.

"Server Resource Dimensioning and Routing of Service Function Chain in NFV Network Architectures" by V. Eramo et al. focuses on the resource dimensioning and SFC routing problems in the NFV architecture. The objective of the problem is to minimize the number of SFCs dropped. The authors formulate the optimization problem and, due to its NP-hard complexity, propose heuristics for both cases of offline and online traffic demand.

"A Game for Energy-Aware Allocation of Virtualized Network Functions" by R. Bruschi et al. presents and evaluates an energy-aware, game-theory-based solution for resource allocation of Virtualized Network Functions (VNFs) within NFV environments. The authors consider each VNF as a player of the problem that competes for the physical network node capacity pool, seeking the minimization of individual cost functions. The physical network nodes dynamically adjust their processing capacity according to the incoming workload flows, by means of an Adaptive Rate strategy that aims at minimizing the product of energy consumption and processing delay.

"A Processor-Sharing Scheduling Strategy for NFV Nodes" by G. Faraci et al. focuses on the allocation strategies of processing resources to the virtual machines running the VNFs. The main contribution of the paper is the definition of a processor-sharing policy, referred to as Network-Aware Round Robin (NARR). The proposed strategy dynamically changes the slices of the CPU assigned to each VNF according to the state of the output network interface card queues. In order not to waste output link bandwidth, more processing resources are assigned to the VNFs whose packets are addressed towards the least loaded output NICs.

"Virtual Networking Performance in OpenStack Platform for Network Function Virtualization" by F. Callegati et al. presents a performance evaluation of an open-source Virtual Infrastructure Manager (VIM), OpenStack, focusing in particular on packet forwarding performance issues. A set of experiments is presented, referring to a number of scenarios inspired by the cloud computing and NFV paradigms, and considering both single-tenant and multitenant scenarios.

Vincenzo Eramo
Xavier Hesselbach-Serra
Yan Luo
Juan Felipe Botero

Research Article

Virtual Networking Performance in OpenStack Platform for Network Function Virtualization

Franco Callegati, Walter Cerroni, and Chiara Contoli

DEI, University of Bologna, Via Venezia 52, 47521 Cesena, Italy

Correspondence should be addressed to Walter Cerroni; walter.cerroni@unibo.it

Received 19 October 2015; Revised 19 January 2016; Accepted 30 March 2016

Academic Editor: Yan Luo

Copyright © 2016 Franco Callegati et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The emerging Network Function Virtualization (NFV) paradigm, coupled with the highly flexible and programmatic control of network devices offered by Software Defined Networking solutions, enables unprecedented levels of network virtualization that will definitely change the shape of future network architectures, where legacy telco central offices will be replaced by cloud data centers located at the edge. On the one hand, this software-centric evolution of telecommunications will allow network operators to take advantage of the increased flexibility and reduced deployment costs typical of cloud computing. On the other hand, it will pose a number of challenges in terms of virtual network performance and customer isolation. This paper intends to provide some insights on how an open-source cloud computing platform such as OpenStack implements multitenant network virtualization and how it can be used to deploy NFV, focusing in particular on packet forwarding performance issues. To this purpose, a set of experiments is presented that refer to a number of scenarios inspired by the cloud computing and NFV paradigms, considering both single-tenant and multitenant scenarios. From the results of the evaluation it is possible to highlight potentials and limitations of running NFV on OpenStack.

1. Introduction

Despite the original vision of the Internet as a set of networks interconnected by distributed layer 3 routing nodes, nowadays IP datagrams are not simply forwarded to their final destination based on IP header and next-hop information. A number of so-called middle-boxes process IP traffic, performing cross-layer tasks such as address translation, packet inspection and filtering, QoS management, and load balancing. They represent a significant fraction of network operators' capital and operational expenses. Moreover, they are closed systems, and the deployment of new communication services is strongly dependent on the product capabilities, causing the so-called "vendor lock-in" and Internet "ossification" phenomena [1]. A possible solution to this problem is the adoption of virtualized middle-boxes based on open software and hardware solutions. Network virtualization brings great advantages in terms of flexible network management, performed at the software level, and possible coexistence of multiple customers sharing the same physical infrastructure (i.e., multitenancy). Network virtualization solutions are already widely deployed at different protocol layers, including Virtual Local Area Networks (VLANs), multilayer Virtual Private Network (VPN) tunnels over public wide-area interconnections, and Overlay Networks [2].

Today, the combination of emerging technologies such as Network Function Virtualization (NFV) and Software Defined Networking (SDN) promises to bring innovation one step further. SDN provides a more flexible and programmatic control of network devices and fosters new forms of virtualization that will definitely change the shape of future network architectures [3], while NFV defines standards to deploy software-based building blocks implementing highly flexible network service chains capable of adapting to the rapidly changing user requirements [4].

As a consequence, it is possible to imagine a medium-term evolution of the network architectures where middle-boxes will turn into virtual machines (VMs) implementing network functions within cloud computing infrastructures, and telco central offices will be replaced by data centers located at the edge of the network [5-7]. Network operators will take advantage of the increased flexibility and reduced deployment costs typical of the cloud-based approach, paving the way to the upcoming software-centric evolution of telecommunications [8]. However, a number of challenges must be dealt with, in terms of system integration, data center management, and packet processing performance. For instance, if VLANs are used in the physical switches and in the virtual LANs within the cloud infrastructure, a suitable integration is necessary, and the coexistence of different IP virtual networks dedicated to multiple tenants must be seamlessly guaranteed with proper isolation.

Then a few questions are naturally raised: Will cloud computing platforms be actually capable of satisfying the requirements of complex communication environments such as the operators' edge networks? Will data centers be able to effectively replace the existing telco infrastructures at the edge? Will virtualized networks provide performance comparable to that achieved with current physical networks, or will they pose significant limitations? Indeed, the answer to these questions will be a function of the cloud management platform considered. In this work the focus is on OpenStack, which is among the state-of-the-art Linux-based virtualization and cloud management tools. Developed by the open-source software community, OpenStack implements the Infrastructure-as-a-Service (IaaS) paradigm in a multitenant context [9].

To the best of our knowledge, not much work has been reported about the actual performance limits of network virtualization in OpenStack cloud infrastructures under the NFV scenario. Some authors assessed the performance of Linux-based virtual switching [10, 11], while others investigated network performance in public cloud services [12]. Solutions for low-latency SDN implementation on high-performance cloud platforms have also been developed [13]. However, none of the above works specifically deals with NFV scenarios on the OpenStack platform. Although some mechanisms for effectively placing virtual network functions within an OpenStack cloud have been presented [14], a detailed analysis of their network performance has not been provided yet.

This paper aims at providing insights on how the OpenStack platform implements multitenant network virtualization, focusing in particular on the performance issues, and trying to fill a gap that is starting to get attention also from the OpenStack developer community [15]. The paper objective is to identify performance bottlenecks in the cloud implementation of the NFV paradigms. An ad hoc set of experiments were designed to evaluate the OpenStack performance under critical load conditions, in both single-tenant and multitenant scenarios. The results reported in this work extend the preliminary assessment published in [16, 17].

The paper is structured as follows: the network virtualization concept in cloud computing infrastructures is further elaborated in Section 2; the OpenStack virtual network architecture is illustrated in Section 3; the experimental test-bed that we have deployed to assess its performance is presented in Section 4; the results obtained under different scenarios are discussed in Section 5; some conclusions are finally drawn in Section 6.

2. Cloud Network Virtualization

Generally speaking, network virtualization is not a new concept. Virtual LANs, Virtual Private Networks, and Overlay Networks are examples of virtualization techniques already widely used in networking, mostly to achieve isolation of traffic flows and/or of whole network sections, either for security or for functional purposes such as traffic engineering and performance optimization [2].

Upon considering cloud computing infrastructures, the concept of network virtualization evolves even further. It is not just that some functionalities can be configured in physical devices to obtain some additional functionality in virtual form. In cloud infrastructures, whole parts of the network are virtual, implemented with software devices and/or functions running within the servers. This new "softwarized" network implementation scenario allows novel network control and management paradigms. In particular, the synergies between NFV and SDN offer programmatic capabilities that allow easily defining and flexibly managing multiple virtual network slices at levels not achievable before [1].

In cloud networking, the typical scenario is a set of VMs dedicated to a given tenant, able to communicate with each other as if connected to the same Local Area Network (LAN), independently of the physical server/servers they are running on. The VMs and LANs of different tenants have to be isolated and should communicate with the outside world only through layer 3 routing and filtering devices. From such requirements stem two major issues to be addressed in cloud networking: (i) integration of any set of virtual networks defined in the data center physical switches with the specific virtual network technologies adopted by the hosting servers and (ii) isolation among virtual networks that must be logically separated because of being dedicated to different purposes or different customers. Moreover, these problems should be solved with performance optimization in mind, for instance, aiming at keeping VMs with intensive exchange of data colocated in the same server, keeping local traffic inside the host and thus reducing the need for external network resources, and minimizing the communication latency.

The solution to these issues is usually fully supported by the VM manager (i.e., the Hypervisor) running on the hosting servers. Layer 3 routing functions can be executed by taking advantage of lightweight virtualization tools, such as Linux containers or network namespaces, resulting in isolated virtual networks with dedicated network stacks (e.g., IP routing tables and netfilter flow states) [18]. Similarly, layer 2 switching is typically implemented by means of kernel-level virtual bridges/switches interconnecting a VM's virtual interface to a host's physical interface. Moreover, the VM placing algorithms may be designed to take networking issues into account, thus optimizing the networking in the cloud together with computation effectiveness [19]. Finally, it is worth mentioning that, whatever network virtualization technology is adopted within a data center, it should be compatible with SDN-based implementation of the control plane (e.g., OpenFlow) for improved manageability and programmability [20].
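As a minimal illustration of such namespace-based isolation, the following Python sketch creates a dedicated network stack and wires it to the host with a veth pair, using standard iproute2 commands (names and addresses are arbitrary; root privileges are assumed).

import subprocess

def sh(args):
    subprocess.run(args, check=True)

# A namespace is an independent network stack (interfaces, routes, netfilter).
sh(["ip", "netns", "add", "tenant-A"])

# veth pair: one end stays in the host, the other moves into the namespace.
sh(["ip", "link", "add", "veth-host", "type", "veth", "peer", "name", "veth-A"])
sh(["ip", "link", "set", "veth-A", "netns", "tenant-A"])

# Addressing on both ends; tenant-A now has its own isolated IP stack.
sh(["ip", "addr", "add", "192.168.50.1/24", "dev", "veth-host"])
sh(["ip", "link", "set", "veth-host", "up"])
sh(["ip", "netns", "exec", "tenant-A",
    "ip", "addr", "add", "192.168.50.2/24", "dev", "veth-A"])
sh(["ip", "netns", "exec", "tenant-A", "ip", "link", "set", "veth-A", "up"])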

[Figure 1: Main components of an OpenStack cloud setup.]

For the purposes of this work, the implementation of layer 2 connectivity in the cloud environment is of particular relevance. Many Hypervisors running on Linux systems implement the LANs inside the servers using Linux Bridge, the native kernel bridging module [21]. This solution is straightforward and is natively integrated with the powerful Linux packet filtering and traffic conditioning kernel functions. The overall performance of this solution should be at a reasonable level when the system is not overloaded [22]. The Linux Bridge basically works as a transparent bridge with MAC learning, providing the same functionality as a standard Ethernet switch in terms of packet forwarding. But such standard behavior is not compatible with SDN and is not flexible enough when aspects such as multitenant traffic isolation, transparent VM mobility, and fine-grained forwarding programmability are critical. The Linux-based bridging alternative is Open vSwitch (OVS), a software switching facility specifically designed for virtualized environments and capable of reaching kernel-level performance [23]. OVS is also OpenFlow-enabled and therefore fully compatible and integrated with SDN solutions.

3. OpenStack Virtual Network Infrastructure

OpenStack provides cloud managers with a web-based dashboard, as well as a powerful and flexible Application Programmable Interface (API), to control a set of physical hosting servers executing different kinds of Hypervisors (in general, OpenStack is designed to manage a number of computers hosting application servers; these application servers can be executed by fully fledged VMs, lightweight containers, or bare-metal hosts; in this work we focus on the most challenging case of application servers running on VMs) and to manage the required storage facilities and virtual network infrastructures. The OpenStack dashboard also allows instantiating computing and networking resources within the data center infrastructure with a high level of transparency. As illustrated in Figure 1, a typical OpenStack cloud is composed of a number of physical nodes and networks:

(i) Controller node: managing the cloud platform.
(ii) Network node: hosting the networking services for the various tenants of the cloud and providing external connectivity.
(iii) Compute nodes: as many hosts as needed in the cluster to execute the VMs.
(iv) Storage nodes: to store data and VM images.
(v) Management network: the physical networking infrastructure used by the controller node to manage the OpenStack cloud services running on the other nodes.
(vi) Instance/tunnel network (or data network): the physical network infrastructure connecting the network node and the compute nodes, to deploy virtual tenant networks and allow inter-VM traffic exchange and VM connectivity to the cloud networking services running in the network node.
(vii) External network: the physical infrastructure enabling connectivity outside the data center.

OpenStack has a component specifically dedicated to network service management: this component, formerly known as Quantum, was renamed as Neutron in the Havana release. Neutron decouples the network abstractions from the actual implementation and provides administrators and users with a flexible interface for virtual network management. The Neutron server is centralized and typically runs in the controller node. It stores all network-related information and implements the virtual network infrastructure in a distributed and coordinated way. This allows Neutron to transparently manage multitenant networks across multiple compute nodes and to provide transparent VM mobility within the data center.

Neutron's main network abstractions are

(i) network: a virtual layer 2 segment;
(ii) subnet: a layer 3 IP address space used in a network;
(iii) port: an attachment point to a network and to one or more subnets on that network;
(iv) router: a virtual appliance that performs routing between subnets and address translation;
(v) DHCP server: a virtual appliance in charge of IP address distribution;
(vi) security group: a set of filtering rules implementing a cloud-level firewall.

A cloud customer wishing to implement a virtual infrastructure in the cloud is considered an OpenStack tenant and can use the OpenStack dashboard to instantiate computing and networking resources, typically creating a new network and the necessary subnets, optionally spawning the related DHCP servers, then starting as many VM instances as required, based on a given set of available images, and specifying the subnet (or subnets) to which each VM is connected. Neutron takes care of creating a port on each specified subnet (and its underlying network) and of connecting the VM to that port, while the DHCP service on that network (resident in the network node) assigns a fixed IP address to it. Other virtual appliances (e.g., routers providing global connectivity) can be implemented directly in the cloud platform, by means of containers and network namespaces typically defined in the network node. The different tenant networks are isolated by means of VLANs and network namespaces, whereas the security groups protect the VMs from external attacks or unauthorized access. When some VM instances offer services that must be reachable by external users, the cloud provider defines a pool of floating IP addresses on the external network and configures the network node with VM-specific forwarding rules based on those floating addresses.
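As an aside, the same workflow can be scripted instead of using the dashboard. The sketch below uses the openstacksdk Python library (one possible client, not mentioned in the paper); the cloud name, image, and flavor are placeholder assumptions.

import openstack

# Credentials are read from a clouds.yaml entry named "mycloud" (assumption).
conn = openstack.connect(cloud="mycloud")

# Tenant network and subnet: Neutron spawns the DHCP server automatically.
net = conn.network.create_network(name="tenant-net")
subnet = conn.network.create_subnet(
    name="tenant-subnet", network_id=net.id,
    ip_version=4, cidr="10.10.1.0/24")

# Boot a VM attached to the new network; Neutron creates the port and
# the DHCP service assigns its fixed IP address.
image = conn.compute.find_image("cirros")          # placeholder image
flavor = conn.compute.find_flavor("m1.tiny")       # placeholder flavor
server = conn.compute.create_server(
    name="vm1", image_id=image.id, flavor_id=flavor.id,
    networks=[{"uuid": net.id}])
conn.compute.wait_for_server(server)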

OpenStack implements the virtual network infrastructure (VNI) exploiting multiple virtual bridges connecting virtual and/or physical interfaces that may reside in different network namespaces. To better understand such a complex system, a graphical tool was developed to display all the network elements used by OpenStack [24]. Two examples, showing the internal state of a network node connected to three virtual subnets and of a compute node running two VMs, are displayed in Figures 2 and 3, respectively.

Each node runs an OVS-based integration bridge, named br-int, and, connected to it, an additional OVS bridge for each data center physical network attached to the node. So the network node (Figure 2) includes br-tun for the instance/tunnel network and br-ex for the external network. A compute node (Figure 3) includes br-tun only.

Layer 2 virtualization and multitenant isolation on the physical network can be implemented using either VLANs or layer 2-in-layer 3/4 tunneling solutions, such as Virtual eXtensible LAN (VXLAN) or Generic Routing Encapsulation (GRE), which allow extending the local virtual networks also to remote data centers [25]. The examples shown in Figures 2 and 3 refer to the case of tenant isolation implemented with GRE tunnels on the instance/tunnel network. Whatever virtualization technology is used in the physical network, its virtual networks must be mapped into the VLANs used internally by Neutron to achieve isolation. This is performed by taking advantage of the programmable features available in OVS, through the insertion of appropriate OpenFlow mapping rules in br-int and br-tun.
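On a live node, these mapping rules can be inspected with the standard Open vSwitch tools; the following sketch simply wraps the relevant commands in Python. Bridge names follow Figures 2 and 3, and the specific flow entries printed will depend on the deployment.

import subprocess

def dump(cmd):
    print(subprocess.run(cmd, capture_output=True, text=True).stdout)

# Bridges and ports as seen by OVS (br-int, br-tun, patch ports, etc.).
dump(["ovs-vsctl", "show"])

# OpenFlow tables of the tunnel and integration bridges: look for actions
# such as mod_vlan_vid (tunnel ID -> internal VLAN) and set_tunnel
# (VLAN -> tunnel ID), which implement the mapping between tenant
# networks and the VLANs used internally by Neutron.
dump(["ovs-ofctl", "dump-flows", "br-tun"])
dump(["ovs-ofctl", "dump-flows", "br-int"])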

Virtual bridges are interconnected by means of either virtual Ethernet (veth) pairs or patch port pairs, consisting of two virtual interfaces that act as the endpoints of a pipe: anything entering one endpoint always comes out on the other side.

From the networking point of view, the creation of a new VM instance involves the following steps:

(i) The OpenStack scheduler component running in the controller node chooses the compute node that will host the VM.
(ii) A tap interface is created for each VM network interface, to connect it to the Linux kernel.
(iii) A Linux Bridge dedicated to each VM network interface is created (in Figure 3 two of them are shown), and the corresponding tap interface is attached to it.
(iv) A veth pair connecting the new Linux Bridge to the integration bridge is created.

The veth pair clearly emulates the Ethernet cable that would connect the two bridges in real life. Nonetheless, why the new Linux Bridge is needed is not intuitive, as the VM's tap interface could be directly attached to br-int. In short, the reason is that the antispoofing rules currently implemented by Neutron adopt the native Linux kernel filtering functions (netfilter) applied to bridged tap interfaces, which work only under Linux Bridges. Therefore, the Linux Bridge is required as an intermediate element to interconnect the VM to the integration bridge. The security rules are applied to the Linux Bridge on the tap interface that connects the kernel-level bridge to the virtual Ethernet port of the VM running in user-space.

4. Experimental Setup

The previous section makes the complexity of the OpenStack virtual network infrastructure clear. To understand optimal design strategies in terms of network performance, it is of great importance to analyze it under critical traffic conditions and assess the maximum sustainable packet rate under different application scenarios. The goal is to isolate as much as possible the level of performance of the main OpenStack network components and determine where the bottlenecks are located, speculating on possible improvements. To this purpose, a test-bed including a controller node, one or two compute nodes (depending on the specific experiment), and a network node was deployed and used to obtain the results presented in the following. In the test-bed, each compute node runs KVM, the native Linux VM Hypervisor, and is equipped with 8 GB of RAM and a quad-core processor enabled to hyperthreading, resulting in 8 virtual CPUs.

Figure 2: Network elements in an OpenStack network node connected to three virtual subnets. Three OVS bridges (red boxes) are interconnected by patch port pairs (orange boxes). br-ex is directly attached to the external network physical interface (eth0), whereas a GRE tunnel is established on the instance/tunnel network physical interface (eth1) to connect br-tun with its counterpart in the compute node. A number of br-int ports (light-green boxes) are connected to four virtual router interfaces and three DHCP servers. An additional physical interface (eth2) connects the network node to the management network.

The test-bed was configured to implement three possible use cases:

(1) a typical single tenant cloud computing scenario;
(2) a multitenant NFV scenario with dedicated network functions;
(3) a multitenant NFV scenario with shared network functions.

For each use case, multiple experiments were executed, as reported in the following. In the various experiments, typically a traffic source sends packets at increasing rate to a destination that measures the received packet rate and throughput. To this purpose, the RUDE & CRUDE tool was used for both traffic generation and measurement [26].

In some cases, the Iperf3 tool was also added to generate background traffic at a fixed data rate [27]. All physical interfaces involved in the experiments were Gigabit Ethernet network cards.
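The measurement principle of these tools can be illustrated in a few lines of Python: a sender paces fixed-size UDP datagrams at a target packet rate, while a receiver counts what arrives over a time window. This is only a simplified sketch of the approach, not the RUDE/CRUDE implementation; the address, port, and rate values are placeholders:

    import socket, time

    def send(dest=("192.168.1.2", 5005), rate_pps=10000, size=1472, count=100000):
        # Pace datagrams so that the average rate matches rate_pps
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        payload, start = b"x" * size, time.time()
        for i in range(count):
            sock.sendto(payload, dest)
            while time.time() < start + (i + 1) / rate_pps:
                pass  # busy-wait keeps inter-packet spacing tight

    def receive(port=5005, window=10.0):
        # Count received datagrams over a fixed window and report the rate
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", port))
        sock.settimeout(1.0)
        pkts, end = 0, time.time() + window
        while time.time() < end:
            try:
                sock.recvfrom(2048)
                pkts += 1
            except socket.timeout:
                pass
        print("received %.0f pps" % (pkts / window))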

Figure 3: Network elements in an OpenStack compute node running two VMs. Two Linux Bridges (blue boxes) are attached to the VM tap interfaces (green boxes) and connected by virtual Ethernet pairs (light-blue boxes) to br-int.

4.1. Single Tenant Cloud Computing Scenario. This is the typical configuration where a single tenant runs one or multiple VMs that exchange traffic with one another in the cloud or with an external host, as shown in Figure 4. This is a rather trivial case of limited general interest, but it is useful to assess some basic concepts and pave the way to the deeper analysis developed in the second part of this section. In the experiments reported, as mentioned above, the virtualization Hypervisor was always KVM. A scenario with OpenStack running the cloud environment and a scenario without OpenStack were considered, to assess some general comparison and allow a first isolation of the performance degradation due to the individual building blocks, in particular Linux Bridge and OVS. The experiments report the following cases:

(1) OpenStack scenario: it adopts the standard OpenStack cloud platform, as described in the previous section, with two VMs respectively acting as sender and receiver. In particular, the following setups were tested:

(1.1) a single compute node executing two colocated VMs;
(1.2) two distinct compute nodes, each executing a VM.

(2) Non-OpenStack scenario: it adopts physical hosts running Linux-Ubuntu server and KVM Hypervisor, using either OVS or Linux Bridge as a virtual switch. The following setups were tested:

(2.1) one physical host executing two colocated VMs, acting as sender and receiver and directly connected to the same Linux Bridge;
(2.2) the same setup as the previous one, but with an OVS bridge instead of a Linux Bridge;
(2.3) two physical hosts: one executing the sender VM connected to an internal OVS, and the other natively acting as the receiver.

Figure 4: Reference logical architecture of a single tenant virtual infrastructure with 5 hosts: 4 hosts are implemented as VMs in the cloud and are interconnected via the OpenStack layer 2 virtual infrastructure; the 5th host is implemented by a physical machine placed outside the cloud but still connected to the same logical LAN.

Figure 5: Multitenant NFV scenario with dedicated network functions, tested on the OpenStack platform.

4.2. Multitenant NFV Scenario with Dedicated Network Functions. The multitenant scenario we want to analyze is inspired by a simple NFV case study, as illustrated in Figure 5: each tenant's service chain consists of a customer-controlled VM followed by a dedicated deep packet inspection (DPI) virtual appliance and a conventional gateway (router) connecting the customer LAN to the public Internet. The DPI is deployed by the service operator as a separate VM with two network interfaces, running a traffic monitoring application based on the nDPI library [28]. It is assumed that the DPI analyzes the traffic profile of the customers (source and destination IP addresses and ports, application protocol, etc.) to guarantee the matching with the customer service level agreement (SLA), a practice that is rather common among Internet service providers to enforce network security and traffic policing. The virtualization approach, executing the DPI in a VM, makes it possible to easily configure and adapt the inspection function to the specific tenant characteristics. For this reason, every tenant has its own DPI with dedicated configuration. On the other hand, the gateway has to implement a standard functionality and is shared among customers. It is implemented as a virtual router for packet forwarding and NAT operations.

The implementation of the test scenarios has been done following the OpenStack architecture. The compute nodes of the cluster run the VMs, while the network node runs the virtual router within a dedicated network namespace. All layer 2 connections are implemented by a virtual switch (with proper VLAN isolation) distributed in both the compute and network nodes. Figure 6 shows the view provided by the OpenStack dashboard in the case of 4 tenants simultaneously active, which is the one considered for the numerical results presented in the following. The choice of 4 tenants was made to provide meaningful results with an acceptable degree of complexity, without lack of generality. As the results show, this is enough to put the hardware resources of the compute node under stress and therefore evaluate performance limits and critical issues.

It is very important to outline that the VM setup shown in Figure 5 is not commonly seen in a traditional cloud computing environment. The VMs usually behave as single hosts connected as endpoints to one or more virtual networks, with one single network interface and no pass-through forwarding duties. In NFV, the virtual network functions (VNFs) often perform actions that require packet forwarding. Network Address Translators (NATs), Deep Packet Inspectors (DPIs), and so forth all belong to this category. If such VNFs are hosted in VMs, the result is that VMs in the OpenStack infrastructure must be allowed to perform packet forwarding, which goes against the typical rules implemented for security reasons in OpenStack. For instance, when a new VM is instantiated, it is attached to a Linux Bridge to which filtering rules are applied, with the goal of avoiding that the VM sends packets with MAC and IP addresses that are not the ones allocated to the VM itself. Clearly this is an antispoofing rule that makes perfect sense in a normal networking environment, but it impairs the forwarding of packets originated by another VM, as is the case of the NFV scenario. In the scenario considered here, it was therefore necessary to permanently modify the filtering rules in the Linux Bridges, by allowing, within each tenant slice, packets coming from or directed to the customer VM's IP address to pass through the Linux Bridges attached to the DPI virtual appliance. Similarly, the virtual router is usually connected just to one LAN, so its NAT function is configured for a single pool of addresses. This was also modified and adapted to serve the whole set of internal networks used in the multitenant setup.
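As an indication of the kind of change involved, the following sketch inserts a permissive forwarding rule for a customer VM address using the iptables physdev match. The address and the rule shape are illustrative only; the actual Neutron-generated filtering chains are more elaborate:

    import subprocess

    # Allow bridged forwarding of packets from/to the customer VM address
    # (10.0.1.10 is a placeholder) so the DPI VM can relay its traffic.
    for direction in ("-s", "-d"):
        subprocess.run(["iptables", "-I", "FORWARD", "-m", "physdev",
                        "--physdev-is-bridged", direction, "10.0.1.10",
                        "-j", "ACCEPT"], check=True)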

4.3. Multitenant NFV Scenario with Shared Network Functions. We finally extend our analysis to a set of multitenant scenarios assuming different levels of shared VNFs, as illustrated in Figure 7. We start with a single VNF, that is, the virtual router connecting all tenants to the external network (Figure 7(a)). Then we progressively add a shared DPI (Figure 7(b)), a shared firewall/NAT function (Figure 7(c)), and a shared traffic shaper (Figure 7(d)). The rationale behind this last group of setups is to evaluate how NFV deployment on top of an OpenStack compute node performs under a realistic multitenant scenario, where traffic flows must be processed by a chain of multiple VNFs. The complexity of the virtual network path inside the compute node for the VNF chaining of Figure 7(d) is displayed in Figure 8. The peculiar nature of NFV traffic flows is clearly shown in the figure, where packets are being forwarded multiple times across br-int as they enter and exit the multiple VNFs running in the compute node.

Figure 6: The OpenStack dashboard shows the tenants' virtual networks (slices). Each slice includes a VM connected to an internal network (InVMnet_i) and a second VM performing DPI and packet forwarding between InVMnet_i and DPInet_i. Connectivity with the public Internet is provided for all by the virtual router in the bottom-left corner.

5. Numerical Results

5.1. Benchmark Performance. Before presenting and discussing the performance of the study scenarios described above, it is important to set some benchmark as a reference for comparison. This was done by considering a back-to-back (B2B) connection between two physical hosts with the same hardware configuration used in the cluster of the cloud platform.

The former host acts as traffic generator while the latter acts as traffic sink. The aim is to verify and assess the maximum throughput and sustainable packet rate of the hardware platform used for the experiments. Packet flows ranging from 10^3 to 10^5 packets per second (pps), for both 64- and 1500-byte IP packet sizes, were generated.

Figure 7: Multitenant NFV scenario with shared network functions, tested on the OpenStack platform: (a) single VNF; (b) two VNFs' chaining; (c) three VNFs' chaining; (d) four VNFs' chaining.

For 1500-byte packets, the throughput saturates to about 970 Mbps at 80 Kpps. Given that the measurement does not consider the Ethernet overhead, this limit is clearly very close to 1 Gbps, which is the physical limit of the Ethernet interface. For 64-byte packets the results are different, since the maximum measured throughput is about 150 Mbps. Therefore, the limiting factor is not the Ethernet bandwidth but the maximum sustainable packet processing rate of the compute node. These results are shown in Figure 9.
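As a sanity check on the 1500-byte benchmark, the saturation point is consistent with the Gigabit Ethernet line rate:

$80\,\text{kpps} \times 1500\,\text{bytes/packet} \times 8\,\text{bits/byte} = 960\,\text{Mbps} \approx 1\,\text{Gbps},$

in agreement with the measured plateau of about 970 Mbps.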

This latter limitation, related to the processing capabilities of the hosts, is not very relevant to the scopes of this work. Indeed, it is always possible in a real operational environment to deploy more powerful and better dimensioned hardware. This was not possible in this set of experiments, where the cloud cluster was an existing research infrastructure which could not be modified at will. Nonetheless, the objective here is to understand the limitations that emerge as a consequence of the networking architecture resulting from the deployment of the VNFs in the cloud, and not of the specific hardware configuration. For these reasons, as well as for the sake of brevity, the numerical results presented in the following mostly focus on the case of 1500-byte packet length, which will stress the network more than the hosts in terms of performance.

5.2. Single Tenant Cloud Computing Scenario. The first series of results is related to the single tenant scenario described in Section 4.1. Figure 10 shows the comparison of OpenStack setups (1.1) and (1.2) with the B2B case. The figure shows that the different networking configurations play a crucial role in performance. Setup (1.1), with the two VMs colocated in the same compute node, clearly is more demanding, since the compute node has to process the workload of all the components shown in Figure 3, that is, packet generation and reception in two VMs and layer 2 switching in two Linux Bridges and two OVS bridges (as a matter of fact, the packets are both outgoing and incoming at the same time within the same physical machine). The performance starts deviating from the B2B case at around 20 Kpps, with a saturating effect starting at 30 Kpps. This is the maximum packet processing capability of the compute node, regardless of the physical networking capacity, which is not fully exploited in this particular scenario where the traffic flow does not leave the physical host. Setup (1.2) splits the workload over two physical machines, and the benefit is evident. The performance is almost ideal, with very little penalty due to the virtualization overhead.

These very simple experiments lead to an important conclusion that motivates the more complex experiments that follow: the standard OpenStack virtual network implementation can show significant performance limitations. For this reason, the first objective was to investigate where the possible bottleneck is, by evaluating the performance of the virtual network components in isolation. This cannot be done with OpenStack in action; therefore, ad hoc virtual networking scenarios were implemented, deploying just parts of the typical OpenStack infrastructure. These are called Non-OpenStack scenarios in the following.

Figure 8: A view of the OpenStack compute node with the tenant VM and the VNFs installed, including the building blocks of the virtual network infrastructure. The red dashed line shows the path followed by the packets traversing the VNF chain displayed in Figure 7(d).

Figure 9: Throughput versus generated packet rate in the B2B setup, for 64- and 1500-byte packets. Comparison with ideal 1500-byte packet throughput.

Figure 10: Received versus generated packet rate in the OpenStack scenario, setups (1.1) and (1.2), with 1500-byte packets.

Setups (2.1) and (2.2) compare Linux Bridge, OVS, and B2B, as shown in Figure 11. The graphs show interesting and important results that can be summarized as follows:

(i) The introduction of some virtual network component (thus introducing the processing load of the physical hosts in the equation) is always a cause of performance degradation, but with very different degrees of magnitude depending on the virtual network component.

(ii) OVS introduces a rather limited performance degradation, at very high packet rate, with a loss of some percent.

(iii) Linux Bridge introduces a significant performance degradation, starting well before the OVS case and leading to a loss in throughput as high as 50%.

Figure 11: Received versus generated packet rate in the Non-OpenStack scenario, setups (2.1) and (2.2), with 1500-byte packets.

The conclusion of these experiments is that the presence of additional Linux Bridges in the compute nodes is one of the main reasons for the OpenStack performance degradation. Results obtained from testing setup (2.3) are displayed in Figure 12, confirming that with OVS it is possible to reach performance comparable with the baseline.

5.3. Multitenant NFV Scenario with Dedicated Network Functions. The second series of experiments was performed with reference to the multitenant NFV scenario with dedicated network functions described in Section 4.2. The case study considers that different numbers of tenants are hosted in the same compute node, sending data to a destination outside the LAN, therefore beyond the virtual gateway. Figure 13 shows the packet rate actually received at the destination for each tenant, for different numbers of simultaneously active tenants, with 1500-byte IP packet size. In all cases, the tenants generate the same amount of traffic, resulting in as many overlapping curves as the number of active tenants. All curves grow linearly as long as the generated traffic is sustainable, and then they saturate. The saturation is caused by the physical bandwidth limit imposed by the Gigabit Ethernet interfaces involved in the data transfer. In fact, the curves become flat as soon as the packet rate reaches about 80 Kpps for 1 tenant, about 40 Kpps for 2 tenants, about 27 Kpps for 3 tenants, and about 20 Kpps for 4 tenants, that is, when the total packet rate is slightly more than 80 Kpps, corresponding to 1 Gbps.

Figure 12: Received versus generated packet rate in the Non-OpenStack scenario, setup (2.3), with 1500-byte packets.

Figure 13: Received versus generated packet rate for each tenant (T1, T2, T3, and T4), for different numbers of active tenants, with 1500-byte IP packet size.

In this case, it is worth investigating what happens for small packets, therefore putting more pressure on the processing capabilities of the compute node. Figure 14 reports the 64-byte packet size case. As discussed previously, in this case the performance saturation is not caused by the physical bandwidth limit, but by the inability of the hardware platform to cope with the packet processing workload (in fact, the single compute node has to process the workload of all the components involved, including packet generation and DPI in the VMs of each tenant, as well as layer 2 packet processing and switching in three Linux Bridges per tenant and two OVS bridges). As could be easily expected from the results presented in Figure 9, the virtual network is not able to use the whole physical capacity. Even in the case of just one tenant, a total bit rate of about 77 Mbps, well below 1 Gbps, is measured. Moreover, this penalty increases with the number of tenants (i.e., with the complexity of the virtual system). With two tenants, the curve saturates at a total of approximately 150 Kpps (75 × 2); with three tenants, at a total of approximately 135 Kpps (45 × 3); and with four tenants, at a total of approximately 120 Kpps (30 × 4). This is to say that an increase of one unit in the number of tenants results in a decrease of about 10% in the usable overall network capacity and in a similar penalty per tenant.

Figure 14: Received versus generated packet rate for each tenant (T1, T2, T3, and T4), for different numbers of active tenants, with 64-byte IP packet size.

Given the results of the previous section, it is likely that the Linux Bridges are responsible for most of this performance degradation. In Figure 15, a comparison is presented between the total throughput obtained under normal OpenStack operations and the corresponding total throughput measured in a custom configuration where the Linux Bridges attached to each VM are bypassed. To implement the latter scenario, the OpenStack virtual network configuration running in the compute node was modified by connecting each VM's tap interface directly to the OVS integration bridge. The curves show that the presence of Linux Bridges in normal OpenStack mode is indeed causing performance degradation, especially when the workload is high (i.e., with 4 tenants). It is interesting to note also that the penalty related to the number of tenants is mitigated by the bypass, but not fully solved.
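A sketch of the bypass operation: detach the tap from its Linux Bridge and plug it directly into br-int with an appropriate VLAN tag. This mirrors the manual reconfiguration described above, not an official OpenStack procedure; the interface names (taken from Figure 3) and the tag value are placeholders:

    import subprocess

    tap, qbr, vlan = "tap03b80be1-55", "qbr03b80be1-55", "1"  # placeholders
    # Remove the tap from the dedicated Linux Bridge...
    subprocess.run(["brctl", "delif", qbr, tap], check=True)
    # ...and attach it directly to br-int with the tenant's local VLAN tag
    subprocess.run(["ovs-vsctl", "add-port", "br-int", tap,
                    "--", "set", "port", tap, "tag=" + vlan], check=True)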

5.4. Multitenant NFV Scenario with Shared Network Functions. The third series of experiments was performed with reference to the multitenant NFV scenario with shared network functions described in Section 4.3. In each experiment, four tenants are equally generating increasing amounts of traffic, ranging from 1 to 100 Kpps. Figures 16 and 17 show the packet rate actually received at the destination from tenant T1, as a function of the packet rate generated by T1, for different levels of VNF chaining, with 1500- and 64-byte IP packet size, respectively. The measurements demonstrate that, for the 1500-byte case, adding a single shared VNF (even one that executes heavy packet processing, such as the DPI) does not significantly impact the forwarding performance of the OpenStack compute node for a packet rate below 50 Kpps (note that the physical capacity is saturated by the flows simultaneously generated from four tenants at around 20 Kpps, similarly to what happens in the dedicated VNF case of Figure 13). Then the throughput slowly degrades. In contrast, when 64-byte packets are generated, even a single VNF can cause heavy performance losses above 25 Kpps, when the packet rate reaches the sustainability limit of the forwarding capacity of our compute node. Independently of the packet size, adding another VNF with heavy packet processing (the firewall/NAT is configured with 40000 matching rules) causes the performance to rapidly degrade. This is confirmed when a fourth VNF is added to the chain, although for the 1500-byte case the measured packet rate is the one that saturates the maximum bandwidth made available by the traffic shaper. Very similar performance, which we do not show here, was measured also for the other three tenants.

Figure 15: Total throughput measured versus total packet rate generated by 2 to 4 tenants, for 64-byte packet size. Comparison between normal OpenStack mode and Linux Bridge bypass with 3 and 4 tenants.

Figure 16: Received versus generated packet rate for one tenant (T1) when four tenants are active, with 1500-byte IP packet size and different levels of VNF chaining, as per Figure 7. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

Figure 17: Received versus generated packet rate for one tenant (T1) when four tenants are active, with 64-byte IP packet size and different levels of VNF chaining, as per Figure 7. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

To further investigate the effect of VNF chaining, we considered the case when traffic generated by tenant T1 is not subject to VNF chaining (as in Figure 7(a)), whereas flows originated from T2, T3, and T4 are processed by four VNFs (as in Figure 7(d)). The results presented in Figures 18 and 19 demonstrate that, owing to the traffic shaping function applied to the other tenants, the throughput of T1 can reach values not very far from the case when it is the only active tenant, especially for packet rates below 35 Kpps. Therefore, a smart choice of the VNF chaining and a careful planning of the cloud platform resources could improve the performance of a given class of priority customers. In the same situation, we measured the TCP throughput achievable by the four tenants. As shown in Figure 20, we can reach the same conclusions as in the UDP case.

Figure 18: Received throughput versus generated packet rate for each tenant (T1, T2, T3, and T4), when T1 does not traverse the VNF chain of Figure 7(d), with 1500-byte IP packet size. Comparison with the single tenant case. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

Figure 19: Received throughput versus generated packet rate for each tenant (T1, T2, T3, and T4), when T1 does not traverse the VNF chain of Figure 7(d), with 64-byte IP packet size. Comparison with the single tenant case. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

6. Conclusion

Network Function Virtualization will completely reshape the approach of telco operators to provide existing as well as novel network services, taking advantage of the increased flexibility and reduced deployment costs of the cloud computing paradigm. In this work, the problem of evaluating complexity and performance, in terms of sustainable packet rate, of virtual networking in cloud computing infrastructures dedicated to NFV deployment was addressed. An OpenStack-based cloud platform was considered and deeply analyzed to fully understand the architecture of its virtual network infrastructure. To this end, an ad hoc visual tool was also developed that graphically plots the different functional blocks (and related interconnections) put in place by Neutron, the OpenStack networking service. Some examples were provided in the paper.

Figure 20: Received TCP throughput for each tenant (T1, T2, T3, and T4), when T1 does not traverse the VNF chain of Figure 7(d). Comparison with the single tenant case. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

The analysis brought the focus of the performance investigation on the two basic software switching elements natively adopted by OpenStack, namely, Linux Bridge and Open vSwitch. Their performance was first analyzed in a single tenant cloud computing scenario, by running experiments on a standard OpenStack setup, as well as in ad hoc stand-alone configurations built with the specific purpose of observing them in isolation. The results prove that the Linux Bridge is the critical bottleneck of the architecture, while Open vSwitch shows an almost optimal behavior.

The analysis was then extended to more complex scenarios, assuming a data center hosting multiple tenants deploying NFV environments. The case studies considered first a simple dedicated deep packet inspection function, followed by conventional address translation and routing, and then a more realistic virtual network function chaining shared among a set of customers, with increased levels of complexity. Results about sustainable packet rate and throughput performance of the virtual network infrastructure were presented and discussed.

The main outcome of this work is that an open-source cloud computing platform such as OpenStack can be effectively adopted to deploy NFV in network edge data centers replacing legacy telco central offices. However, this solution poses some limitations to the network performance, which are not simply related to the hosting hardware maximum capacity, but also to the virtual network architecture implemented by OpenStack. Nevertheless, our study demonstrates that some of these limitations can be mitigated with a careful redesign of the virtual network infrastructure and an optimal planning of the virtual network functions. In any case, such limitations must be carefully taken into account for any engineering activity in the virtual networking arena.

Obviously, scaling up the system and distributing the virtual network functions among several compute nodes will definitely improve the overall performance. However, in this case the role of the physical network infrastructure becomes critical, and an accurate analysis is required in order to isolate the contributions of virtual and physical components. We plan to extend our study in this direction in our future work, after properly upgrading our experimental test-bed.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work was partially funded by EIT ICT Labs, Action Line on Future Networking Solutions, Activity no. 15270/2015 "SDN at the Edges". The authors would like to thank Mr. Giuliano Santandrea for his contributions to the experimental setup.

References

[1] B. Han, V. Gopalakrishnan, L. Ji, and S. Lee, "Network function virtualization: challenges and opportunities for innovations," IEEE Communications Magazine, vol. 53, no. 2, pp. 90–97, 2015.
[2] N. M. Mosharaf Kabir Chowdhury and R. Boutaba, "A survey of network virtualization," Computer Networks, vol. 54, no. 5, pp. 862–876, 2010.
[3] The Open Networking Foundation, Software-Defined Networking: The New Norm for Networks, ONF White Paper, The Open Networking Foundation, 2012.
[4] The European Telecommunications Standards Institute, "Network functions virtualisation (NFV): architectural framework," ETSI GS NFV 002 V1.2.1, The European Telecommunications Standards Institute, 2014.
[5] A. Manzalini, R. Minerva, F. Callegati, W. Cerroni, and A. Campi, "Clouds of virtual machines in edge networks," IEEE Communications Magazine, vol. 51, no. 7, pp. 63–70, 2013.
[6] J. Soares, C. Goncalves, B. Parreira et al., "Toward a telco cloud environment for service functions," IEEE Communications Magazine, vol. 53, no. 2, pp. 98–106, 2015.
[7] Open Networking Lab, Central Office Re-Architected as Datacenter (CORD), ON.Lab White Paper, Open Networking Lab, 2015.
[8] K. Pretz, "Software already defines our lives—but the impact of SDN will go beyond networking alone," IEEE The Institute, vol. 38, no. 4, p. 8, 2014.
[9] OpenStack Project, http://www.openstack.org.
[10] F. Sans and E. Gamess, "Analytical performance evaluation of different switch solutions," Journal of Computer Networks and Communications, vol. 2013, Article ID 953797, 11 pages, 2013.
[11] P. Emmerich, D. Raumer, F. Wohlfart, and G. Carle, "Performance characteristics of virtual switching," in Proceedings of the 3rd International Conference on Cloud Networking (CloudNet '13), pp. 120–125, IEEE, Luxembourg City, Luxembourg, October 2014.
[12] R. Shea, F. Wang, H. Wang, and J. Liu, "A deep investigation into network performance in virtual machine based cloud environments," in Proceedings of the 33rd IEEE Conference on Computer Communications (INFOCOM '14), pp. 1285–1293, IEEE, Ontario, Canada, May 2014.
[13] P. Rad, R. V. Boppana, P. Lama, G. Berman, and M. Jamshidi, "Low-latency software defined network for high performance clouds," in Proceedings of the 10th System of Systems Engineering Conference (SoSE '15), pp. 486–491, San Antonio, Tex, USA, May 2015.
[14] S. Oechsner and A. Ripke, "Flexible support of VNF placement functions in OpenStack," in Proceedings of the 1st IEEE Conference on Network Softwarization (NETSOFT '15), pp. 1–6, London, UK, April 2015.
[15] G. Almasi, M. Banikazemi, B. Karacali, M. Silva, and J. Tracey, "Openstack networking: it's time to talk performance," in Proceedings of the OpenStack Summit, Vancouver, Canada, May 2015.
[16] F. Callegati, W. Cerroni, C. Contoli, and G. Santandrea, "Performance of network virtualization in cloud computing infrastructures: the Openstack case," in Proceedings of the 3rd IEEE International Conference on Cloud Networking (CloudNet '14), pp. 132–137, Luxembourg City, Luxembourg, October 2014.
[17] F. Callegati, W. Cerroni, C. Contoli, and G. Santandrea, "Performance of multi-tenant virtual networks in OpenStack-based cloud infrastructures," in Proceedings of the 2nd IEEE Workshop on Cloud Computing Systems, Networks, and Applications (CCSNA '14), in conjunction with IEEE Globecom 2014, pp. 81–85, Austin, Tex, USA, December 2014.
[18] N. Handigol, B. Heller, V. Jeyakumar, B. Lantz, and N. McKeown, "Reproducible network experiments using container-based emulation," in Proceedings of the 8th International Conference on Emerging Networking Experiments and Technologies (CoNEXT '12), pp. 253–264, ACM, December 2012.
[19] P. Bellavista, F. Callegati, W. Cerroni et al., "Virtual network function embedding in real cloud environments," Computer Networks, vol. 93, part 3, pp. 506–517, 2015.
[20] M. F. Bari, R. Boutaba, R. Esteves et al., "Data center network virtualization: a survey," IEEE Communications Surveys & Tutorials, vol. 15, no. 2, pp. 909–928, 2013.
[21] The Linux Foundation, Linux Bridge, The Linux Foundation, 2009, http://www.linuxfoundation.org/collaborate/workgroups/networking/bridge.
[22] J. T. Yu, "Performance evaluation of Linux bridge," in Proceedings of the Telecommunications System Management Conference, Louisville, Ky, USA, April 2004.
[23] B. Pfaff, J. Pettit, T. Koponen, K. Amidon, M. Casado, and S. Shenker, "Extending networking into the virtualization layer," in Proceedings of the 8th ACM Workshop on Hot Topics in Networks (HotNets '09), New York, NY, USA, October 2009.
[24] G. Santandrea, Show My Network State, 2014, https://sites.google.com/site/showmynetworkstate.
[25] R. Jain and S. Paul, "Network virtualization and software defined networking for cloud computing: a survey," IEEE Communications Magazine, vol. 51, no. 11, pp. 24–31, 2013.
[26] "RUDE & CRUDE: Real-Time UDP Data Emitter & Collector for RUDE," http://sourceforge.net/projects/rude.
[27] iperf3: a TCP, UDP, and SCTP network bandwidth measurement tool, https://github.com/esnet/iperf.
[28] nDPI: Open and Extensible LGPLv3 Deep Packet Inspection Library, http://www.ntop.org/products/ndpi.

Research Article
Server Resource Dimensioning and Routing of Service Function Chain in NFV Network Architectures

V. Eramo,1 A. Tosti,2 and E. Miucci1

1DIET, "Sapienza" University of Rome, Via Eudossiana 18, 00184 Rome, Italy
2Telecom Italia, Via di Val Cannuta 250, 00166 Roma, Italy

Correspondence should be addressed to E. Miucci; 1kronos1@gmail.com

Received 29 September 2015; Accepted 18 February 2016

Academic Editor: Vinod Sharma

Copyright © 2016 V. Eramo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The Network Function Virtualization (NFV) technology aims at virtualizing the network service with the execution of the single service components in Virtual Machines activated on Commercial-off-the-shelf (COTS) servers. Any service is represented by a Service Function Chain (SFC), that is, a set of VNFs to be executed according to a given order. The running of VNFs needs the instantiation of VNF instances (VNFIs) that, in general, are software components executed on Virtual Machines. In this paper we cope with the routing and resource dimensioning problem in NFV architectures. We formulate the optimization problem and, due to its NP-hard complexity, heuristics are proposed for both cases of offline and online traffic demand. We show how the heuristics work correctly by guaranteeing a uniform occupancy of the server processing capacity and the network link bandwidth. A consolidation algorithm for the power consumption minimization is also proposed. The application of the consolidation algorithm allows for a high power consumption saving that, however, is to be paid with an increase in SFC blocking probability.

1. Introduction

Today's networks are overly complex, partly due to an increasing variety of proprietary, fixed-function appliances that are unable to deliver the agility and economics needed to address constantly changing market requirements [1]. This is because network elements have traditionally been optimized for high packet throughput at the expense of flexibility, thus hampering the deployment of new services [2]. Network Function Virtualization (NFV) can provide the infrastructure flexibility and agility needed to successfully compete in today's evolving communications landscape [3]. NFV implements network functions in software running on a pool of shared commodity servers instead of using dedicated proprietary hardware. This virtualized approach decouples the network hardware from the network function and results in increased infrastructure flexibility and reduced hardware costs. Because the infrastructure is simplified and streamlined, new and expanded services can be created quickly and with less expense. Implementation of the paradigm has also been proposed [4] and the performance has been investigated [5]. To support the NFV technology, both ETSI [6, 7] and IETF [8, 9] are defining novel network architectures able to allocate resources for Virtualized Network Functions (VNFs) as well as manage and orchestrate NFV to support services. In particular, the service is represented by a Service Function Chain (SFC) [8], that is, a set of VNFs that have to be executed according to a given order. Any VNF is run on a VNF instance (VNFI), implemented with one Virtual Machine (VM) whose resources (Vcores, RAM memory, etc.) are allocated to it [10]. Some solutions have been proposed in the literature to solve the problem of choosing the servers where to instantiate the VNFs and to determine the network paths interconnecting them [1]. A formulation of the optimization problem is illustrated in [11]. Three greedy algorithms and a tabu search-based heuristic are proposed in [12]. The extension of the problem to the case in which virtual routers are also considered is proposed in [13].

The innovative contribution of our paper is that, differently from [12], we follow an approach that allows for both the resource dimensioning and the SFC routing. We formulate the optimization problem whose objective is the minimization of the number of dropped SFC requests, with the constraint that the SFCs are routed so as to respect both the server processing capacity and the network link bandwidth. With the problem being NP-hard, we introduce a heuristic that allows for (i) the dimensioning of the Virtual Machines in terms of number of Vcores assigned and (ii) the SFC routing through the VNF instances implemented by the Virtual Machines. The heuristic performs simultaneously the dimensioning and routing operations. Furthermore, we propose a consolidation algorithm based on Virtual Machine migrations and able to achieve power consumption savings. The proposed algorithms are evaluated in scenarios characterized by offline and online traffic demands.

The paper is organized as follows. The related work is discussed in Section 2. Section 3 is devoted to illustrating the optimization problem and the proposed heuristic for the offline traffic demand case. In Section 4 we describe an SFC planner in which the proposed heuristic is applied in the case of online traffic demand. The planner also implements a consolidation algorithm whose application allows for power consumption savings. Some numerical results are shown in Section 5 to prove the effectiveness of the proposed heuristics. Finally, the main conclusions and future research items are mentioned in Section 6.

2. Related Work

The Internet Engineering Task Force (IETF) has formed the SFC Working Group [8, 9] to define Service Function Chaining related problems and to standardize the architecture and protocols. A Service Function Chain (SFC) is defined as a set of abstract service functions [15] and ordering constraints that must be applied to packets selected as a result of the classification. When virtual service functions are considered, the SFC is referred to as Virtual Network Function Forwarding Graph (VNFFG) within the ETSI [6]. To support the SFCs, Virtual Network Function instances (VNFIs) are activated and executed in COTS servers. To achieve the economies of scale expected from NFV, network link and server resources should be used efficiently. For this reason, efficient algorithms have to be introduced to determine where to instantiate the VNFIs and to route the SFCs, by choosing the network paths and the VNFIs involved. The algorithms have to take into account the limited resources of the network links and the servers, and pursue objectives of load balancing, energy saving, recovery from failure, and so on [1]. The task of placing SFCs is closely related to virtual network embedding [16] and virtual data network embedding [17], and may therefore be formulated as an optimization problem with a particular objective. This approach has been followed by [11, 18–21]. For instance, Moens and Turck [11] formulate the SFC placement problem as a linear optimization problem that has the objective of minimizing the number of active servers. Other objectives are pursued in [19] (latency, remaining data rate, number of used network nodes, etc.), and the SFC placing is formulated as a mixed integer quadratically constrained program.

It has been proved that the SFC placing problem is NP-hard. For this reason, efficient heuristics have been proposed and evaluated [10, 13, 22, 23]. For example, Xia et al. [22] formulate the placement and chaining problem as binary integer programming and propose a greedy heuristic in which the SFCs are first sorted according to their resource demands, and the SFCs with the highest resource demands are given priority for placement and routing.

In order to achieve energy consumption savings, the NFV architecture should allow for migrations of VNFIs, that is, the migration of the Virtual Machines implementing the VNFIs. Though VNFI migrations allow for energy consumption savings, they may impact the QoS performance received by the users related to the migrated VNFs. A model has been proposed in [24] to derive some performance indicators, such as the whole service downtime and the total migration time, so as to make function migration decisions.

The contribution of this paper is twofold: (i) to propose and to evaluate the performance of an algorithm that performs simultaneously resource dimensioning and SFC routing and (ii) to investigate the advantages, from the point of view of the power consumption saving, that the application of server consolidation techniques allows us to achieve.

3. Offline Algorithms for SFC Routing in NFV Network Architectures

We consider the case in which SFC requests are known in advance. We formulate the optimization problem whose objective is the minimization of the number of dropped SFC requests, with the constraint that the SFCs are routed so as to respect both the server processing capacity and the network link bandwidth. With the problem being NP-hard, we introduce a heuristic that allows for (i) the dimensioning of the Virtual Machines in terms of the number of Vcores assigned and (ii) the SFC routing through the VNF instances implemented by the Virtual Machines. The heuristic performs simultaneously the dimensioning and routing operations.

The section is organized as follows. The network and traffic model is introduced in Section 3.1. Sections 3.2 and 3.3 are devoted to illustrating the optimization problem and the proposed heuristic, respectively.

3.1. Network and Traffic Model. Next we introduce the main terminology used to represent the physical network, the VNFs, and the SFC traffic requests [25]. We represent the physical network PN as a directed graph $G^{PN} = (V^{PN}, E^{PN})$, where $V^{PN}$ and $E^{PN}$ are the sets of physical nodes and links, respectively. The set $V^{PN}$ of nodes is given by the union of the three node sets $V^{PN}_{A}$, $V^{PN}_{R}$, and $V^{PN}_{S}$, that are the sets of access, switching, and server nodes, respectively. The server nodes and links are characterized by the following:

(i) $N^{PN}_{\mathrm{core}}(w)$: processing capacity of the server node $w \in V^{PN}_{S}$, in terms of the number of cores available;
(ii) $C^{PN}(d)$: bandwidth of the physical link $d \in E^{PN}$.

We assume that $F$ types of VNFs can be provisioned, such as firewall, IDS, proxy, load balancers, and so on. We denote by $\mathcal{F} = \{f_{1}, f_{2}, \ldots, f_{F}\}$ the set of VNFs, with $f_{i}$ being the $i$th VNF type. The packet processing time of the VNF $f_{i}$ is denoted by $t^{\mathrm{proc}}_{i}$ ($i = 1, \ldots, F$).

We also assume that the network operator is receiving $T$ Service Function Chain (SFC) requests, known in advance. The $i$th SFC request is characterized by the graph $G^{SFC}_{i} = (V^{SFC}_{i}, E^{SFC}_{i})$, where $V^{SFC}_{i}$ represents the set of access and VNF nodes and $E^{SFC}_{i}$ ($i = 1, \ldots, T$) denotes the links between them. In particular, the set $V^{SFC}_{i}$ is given by the union of $V^{SFC}_{i,A}$ and $V^{SFC}_{i,F}$, denoting the sets of access nodes and VNFs, respectively. The graph is characterized by the following parameters:

(i) $\alpha_{vw}$: assuming the value 1 if the access node $v \in \bigcup_{i=1}^{T} V^{SFC}_{i,A}$ characterizes an SFC request starting/terminating from/to the physical access node $w \in V^{PN}_{A}$; otherwise its value is 0;
(ii) $\beta_{vk}$: assuming the value 1 if the VNF node $v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}$ needs the application of the VNF $f_{k}$ ($k \in [1, F]$); otherwise its value is 0;
(iii) $B^{SFC}(v)$: the processing capacity requested by the VNF node $v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}$; the parameter value depends on both the packet length and the bandwidth of the packet flow incoming to the VNF node;
(iv) $C^{SFC}(e)$: bandwidth requested by the link $e \in \bigcup_{i=1}^{T} E^{SFC}_{i}$.

An example of SFC request is represented in Figure 1, where the traffic emitted by the ingress access node $u$ has to be handled by the VNF nodes $v_{1}$ and $v_{2}$ and is terminating at the egress access node $t$. The access and VNF nodes are interconnected by the links $e_{1}$, $e_{2}$, and $e_{3}$. If the flow bandwidth is 1 Mbps and the packet length is 1500 bytes, we obtain the processing capacities $B^{SFC}(v_{1}) = B^{SFC}(v_{2}) = 83.33$, while the bandwidths $C^{SFC}(e_{1})$, $C^{SFC}(e_{2})$, and $C^{SFC}(e_{3})$ of all links equal 1.

Figure 1: An example of graph representing an SFC request for a flow characterized by the bandwidth of 1 Mbps and packet length of 1500 bytes; $B^{SFC}(v_{1}) = B^{SFC}(v_{2}) = 83.33$ and $C^{SFC}(e_{1}) = C^{SFC}(e_{2}) = C^{SFC}(e_{3}) = 1$.
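The processing capacity value in this example is simply the packet rate of the flow; a quick check of the arithmetic:

$B^{SFC}(v_{1}) = B^{SFC}(v_{2}) = \dfrac{10^{6}\,\text{bit/s}}{1500 \times 8\,\text{bit/packet}} \approx 83.33\,\text{packets/s}.$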

3.2. Optimization Problem for the SFC Routing and VNF Instance Dimensioning. The objective of the SFC Routing and VNF Instance Dimensioning (SRVID) problem is to maximize the number of accepted SFC requests. The output of the problem is characterized by the following: (i) the servers in which the VNF nodes are executed and (ii) the network paths in which the virtual links of the SFCs are routed. This arrangement has to be accomplished without violating either the server processing or the physical link capacities. We assume that each server creates a VNF instance for each type of VNF, which will be shared by the SFCs using that server and requesting that type of VNF. For this reason, each server will activate $F$ VNF instances, one for each type, and another output of the problem is to determine the number of Vcores to be assigned to each VNF instance.

We assume that one virtual link of the SFC graphs can be routed through a single physical network path. We introduce the following notations:

(i) $\mathcal{P}$: set of paths in $G^{PN}$;
(ii) $\delta_{dp}$: the binary function assuming value 1 or 0 if the network link $d$ belongs or does not belong to the path $p \in \mathcal{P}$, respectively;
(iii) $a^{PN}(p)$ and $b^{PN}(p)$: origin and destination nodes of the path $p \in \mathcal{P}$;
(iv) $a^{SFC}(d)$ and $b^{SFC}(d)$: origin and destination nodes of the virtual link $d \in \bigcup_{i=1}^{T} E^{SFC}_{i}$.

Next we formulate the optimal SRVID problem, characterized by the following optimization variables:

(i) $x_{h}$: binary variable assuming the value 1 if the $h$th SFC request is accepted; otherwise its value is zero;
(ii) $y_{wk}$: integer variable characterizing the number of Vcores allocated to the VNF instance of type $k$ in the server $w \in V^{PN}_{S}$;
(iii) $z^{k}_{vw}$: binary variable assuming the value 1 if the VNF node $v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}$ is served by the VNF instance of type $k$ in the server $w \in V^{PN}_{S}$;
(iv) $u_{dp}$: binary variable assuming the value 1 if the virtual link $d \in \bigcup_{i=1}^{T} E^{SFC}_{i}$ is embedded in the physical network path $p \in \mathcal{P}$; otherwise its value is zero.

Next we report the constraints for the optimization variables:

$$\sum_{k=1}^{F} y_{wk} \le N^{PN}_{\mathrm{core}}(w), \quad w \in V^{PN}_{S} \qquad (1)$$

$$\sum_{k=1}^{F} \sum_{w \in V^{PN}_{S}} z^{k}_{vw} \le 1, \quad v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F} \qquad (2)$$

$$z^{k}_{vw} \le \beta_{vk}, \quad k \in [1, F], \; v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}, \; w \in V^{PN}_{S} \qquad (3)$$

$$\sum_{v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}} z^{k}_{vw}\, B^{SFC}(v)\, t^{\mathrm{proc}}_{k} \le y_{wk}, \quad k \in [1, F], \; w \in V^{PN}_{S} \qquad (4)$$

$$u_{dp} \le z^{k}_{a^{SFC}(d)\,a^{PN}(p)}, \quad a^{SFC}(d) \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}, \; k \in [1, F], \; p \in \mathcal{P} \qquad (5)$$

$$u_{dp} \le z^{k}_{b^{SFC}(d)\,b^{PN}(p)}, \quad b^{SFC}(d) \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}, \; k \in [1, F], \; p \in \mathcal{P} \qquad (6)$$

$$u_{dp} \le \alpha_{a^{SFC}(d)\,a^{PN}(p)}, \quad a^{SFC}(d) \in \bigcup_{i=1}^{T} V^{SFC}_{i,A}, \; p \in \mathcal{P} \qquad (7)$$

$$u_{dp} \le \alpha_{b^{SFC}(d)\,b^{PN}(p)}, \quad b^{SFC}(d) \in \bigcup_{i=1}^{T} V^{SFC}_{i,A}, \; p \in \mathcal{P} \qquad (8)$$

$$\sum_{p \in \mathcal{P}} u_{dp} \le 1, \quad d \in \bigcup_{i=1}^{T} E^{SFC}_{i} \qquad (9)$$

$$\sum_{d \in \bigcup_{i=1}^{T} E^{SFC}_{i}} C^{SFC}(d) \sum_{p \in \mathcal{P}} \delta_{ep}\, u_{dp} \le C^{PN}(e), \quad e \in E^{PN} \qquad (10)$$

$$x_{h} \le \sum_{k=1}^{F} \sum_{w \in V^{PN}_{S}} z^{k}_{vw}, \quad h \in [1, T], \; v \in V^{SFC}_{h,F} \qquad (11)$$

$$x_{h} \le \sum_{p \in \mathcal{P}} u_{dp}, \quad h \in [1, T], \; d \in E^{SFC}_{h} \qquad (12)$$

Constraint (1) establishes the fact that at most a number of Vcores equal to the available ones are used for each server node $w \in V^{PN}_{S}$. Constraint (2) expresses the condition that a VNF node can be served by one only VNF instance. To guarantee that a node $v \in V^{SFC}_{h,F}$ needing the application of a VNF type is mapped to a correct VNF instance, we introduce constraint (3). Constraint (4) limits the number of VNF nodes assigned to one VNF instance, by taking into account both the number of Vcores assigned to the VNF instance and the required VNF node processing capacities. Constraints (5)–(8) establish the fact that, when the virtual link $d$ is supported by the physical network path $p$, then $a^{PN}(p)$ and $b^{PN}(p)$ must be the physical network nodes that the nodes $a^{SFC}(d)$ and $b^{SFC}(d)$ of the virtual graph are assigned to. The choice of mapping of any virtual link on a single physical network path is represented by constraint (9). Constraint (10) avoids any overloading on any physical network link. Finally, constraints (11) and (12) establish the fact that an SFC request can be accepted when the nodes and the links of the SFC graph are assigned to VNF instances and physical network paths.

The problem objective is to maximize the number of SFCs accepted, given by

$$\max \sum_{h=1}^{T} x_{h}. \qquad (13)$$

The introduced optimization problem is intractable, because it requires solving an NP-hard bin packing problem [26]. Due to its high complexity, it is not possible to solve the problem directly in a timely manner, given the large number of servers and network nodes. For this reason, we propose an efficient heuristic in Section 3.3.
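Though the full problem does not scale, small instances can be prototyped directly with an ILP solver. The sketch below encodes the acceptance objective (13) and constraints (1), (2), (4), and (11) with the PuLP Python library on a toy instance (two servers, two VNF types, one VNF node per SFC); the routing constraints (5)-(10) are omitted for brevity, and all numerical figures are invented:

    # Toy SRVID instance sketched with PuLP (pip install pulp)
    from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, LpInteger

    servers = ["w1", "w2"]                    # V^PN_S
    vnf_types = ["fw", "dpi"]                 # F = 2
    cores = {"w1": 4, "w2": 4}                # N^PN_core(w)
    t_proc = {"fw": 1e-3, "dpi": 2e-3}        # packet processing times (s)
    # One VNF node per SFC request; B^SFC expressed in packets/s
    sfc_nodes = {"h1": ("v1", "fw", 500.0), "h2": ("v2", "dpi", 800.0)}

    prob = LpProblem("SRVID_toy", LpMaximize)
    x = {h: LpVariable(f"x_{h}", cat=LpBinary) for h in sfc_nodes}
    y = {(w, k): LpVariable(f"y_{w}_{k}", lowBound=0, cat=LpInteger)
         for w in servers for k in vnf_types}
    z = {(v, w): LpVariable(f"z_{v}_{w}", cat=LpBinary)
         for (v, k, b) in sfc_nodes.values() for w in servers}

    # Objective (13): maximize the number of accepted SFC requests
    prob += lpSum(x.values())

    for w in servers:
        # Constraint (1): Vcores allocated per server within capacity
        prob += lpSum(y[w, k] for k in vnf_types) <= cores[w]

    for h, (v, k, b) in sfc_nodes.items():
        # Constraint (2): a VNF node is served by at most one instance
        prob += lpSum(z[v, w] for w in servers) <= 1
        # Constraint (11): accept only if the VNF node is mapped somewhere
        prob += x[h] <= lpSum(z[v, w] for w in servers)

    for w in servers:
        for k in vnf_types:
            # Constraint (4): instance load within its Vcore budget
            prob += lpSum(z[v, w] * b * t_proc[kk]
                          for (v, kk, b) in sfc_nodes.values() if kk == k) <= y[w, k]

    prob.solve()
    print({h: int(x[h].value()) for h in sfc_nodes})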

3.3. Maximizing the Accepted SFC Requests Number (MASRN) Heuristic. The Maximizing the Accepted SFC Requests Number (MASRN) heuristic is based on the embedding algorithm proposed in [14] and has the objective of maximizing the number of accepted SFC requests. The MASRN algorithm tries routing the SFCs one by one by using the least loaded link and server resources, so as to balance their use. Algorithm 1 illustrates the operation mode of the MASRN algorithm; the routing of a target SFC, the $k_{tar}$th, is illustrated. The main inputs of the algorithm are: (i) the set $T_{pre}$ containing the indexes of the SFC requests routed successfully before the $k_{tar}$th SFC request; (ii) the $k_{tar}$th SFC request graph $G^{SFC}_{k_{tar}} = (V^{SFC}_{k_{tar}}, E^{SFC}_{k_{tar}})$; (iii) the values of the parameters $\alpha_{v,w}$ for the $k_{tar}$th SFC request; (iv) the values of the variables $z^{k}_{v,w}$ and $u_{d,p}$ for the SFC requests successfully routed before the $k_{tar}$th SFC request, as defined in Section 3.2.

The operation mode of the heuristic is based on the following variables:

(i) $A_{k}(w)$: the load of the cores allocated for the VNF instance of type $k \in [1, F]$ in the server $w \in V^{PN}_{S}$; the value of $A_{k}(w)$ is initialized by taking into account the core resource amount used by the SFCs successfully routed before the $k_{tar}$th SFC request; hence its initial value is

$$A_{k}(w) = \sum_{v \in \bigcup_{i \in T_{pre}} V^{SFC}_{i,F}} z^{k}_{v,w}\, B^{SFC}(v)\, t^{proc}_{k} \quad (14)$$

(ii) $S_{N}(w)$: defined as the stress of the node $w \in V^{PN}_{S}$ [14] and characterizing the server load; its value is initialized to

$$S_{N}(w) = \sum_{k=1}^{F} A_{k}(w) \quad (15)$$

(iii) $S_{L}(e)$: defined as the stress of the physical network link $e \in E^{PN}$ [14] and characterizing the link load; its value is initialized to

$$S_{L}(e) = \sum_{d \in \bigcup_{i \in T_{pre}} E^{SFC}_{i}} C^{SFC}(d) \sum_{p \in \mathcal{P}} \delta_{e,p}\, u_{d,p} \quad (16)$$


(1) Input: Physical Network Graph $G^{PN} = (V^{PN}, E^{PN})$; $k_{tar}$th SFC request; $k_{tar}$th SFC request graph $G^{SFC}_{k_{tar}} = (V^{SFC}_{k_{tar}}, E^{SFC}_{k_{tar}})$; $T_{pre}$; $\alpha_{v,w}$, $v \in V^{SFC}_{k_{tar},A}$, $w \in V^{PN}_{A}$; $z^{k}_{v,w}$, $k \in [1, F]$, $v \in \bigcup_{i \in T_{pre}} V^{SFC}_{i,F}$, $w \in V^{PN}_{S}$; $u_{d,p}$, $d \in \bigcup_{i \in T_{pre}} E^{SFC}_{i}$, $p \in \mathcal{P}$
(2) Variables: $A_{k}(w)$, $k \in [1, F]$, $w \in V^{PN}_{S}$; $S_{N}(w)$, $w \in V^{PN}_{S}$; $S_{L}(e)$, $e \in E^{PN}$; $y_{w,k}$, $k \in [1, F]$, $w \in V^{PN}_{S}$
/* Evaluation of the candidate mapping of $G^{SFC}_{k_{tar}} = (V^{SFC}_{k_{tar}}, E^{SFC}_{k_{tar}})$ in $G^{PN} = (V^{PN}, E^{PN})$ */
(3) assign each node $v \in V^{SFC}_{k_{tar},A}$ to the physical network node $w \in V^{PN}_{A}$ according to the value of the parameter $\alpha_{v,w}$
(4) select the nodes $w \in V^{PN}_{S}$ and the physical network paths $p \in \mathcal{P}$ to be assigned to the server nodes $v \in V^{SFC}_{k_{tar},F}$ and virtual links $d \in E^{SFC}_{k_{tar},S}$ by applying the algorithm proposed in [14], based on the values of the node and link stresses $S_{N}(w)$ and $S_{L}(e)$; determine the variable values $z^{k}_{v,w}$, $k \in [1, F]$, $v \in V^{SFC}_{k_{tar},F}$, $w \in V^{PN}_{S}$, and $u_{d,p}$, $d \in E^{SFC}_{k_{tar}}$, $p \in \mathcal{P}$
/* Server Resource Availability Check Phase */
(5) for $v \in V^{SFC}_{k_{tar},F}$, $w \in V^{PN}_{S}$, $k \in [1, F]$ do
(6)   if $\sum_{s=1, s \ne k}^{F} y_{w,s} + \lceil A_{k}(w) + z^{k}_{v,w} B^{SFC}(v) t^{proc}_{k} \rceil \le N^{PN}_{core}(w)$ then
(7)     $A_{k}(w) = A_{k}(w) + z^{k}_{v,w} B^{SFC}(v) t^{proc}_{k}$
(8)     $S_{N}(w) = S_{N}(w) + z^{k}_{v,w} B^{SFC}(v) t^{proc}_{k}$
(9)     $y_{w,k} = \lceil A_{k}(w) \rceil$
(10)  else
(11)    REJECT the $k_{tar}$th SFC request
(12)  end if
(13) end for
/* Link Resource Availability Check Phase */
(14) for $e \in E^{PN}$ do
(15)   if $S_{L}(e) + \sum_{d \in E^{SFC}_{k_{tar}}} C^{SFC}(d) \sum_{p \in \mathcal{P}} \delta_{e,p} u_{d,p} \le C^{PN}(e)$ then
(16)     $S_{L}(e) = S_{L}(e) + \sum_{d \in E^{SFC}_{k_{tar}}} C^{SFC}(d) \sum_{p \in \mathcal{P}} \delta_{e,p} u_{d,p}$
(17)   else
(18)     REJECT the $k_{tar}$th SFC request
(19)   end if
(20) end for
(21) Output: $z^{k}_{v,w}$, $k \in [1, F]$, $v \in V^{SFC}_{k_{tar},F}$, $w \in V^{PN}_{S}$; $u_{d,p}$, $d \in E^{SFC}_{k_{tar}}$, $p \in \mathcal{P}$

Algorithm 1: MASRN algorithm.

The candidate physical network links and nodes in which to embed the target SFC are evaluated. First of all, the access nodes are evaluated (line 3) according to the values of the parameters $\alpha_{v,w}$, $v \in V^{SFC}_{k_{tar},A}$, $w \in V^{PN}_{A}$. Next, the MASRN algorithm selects a cluster of server nodes (line 4) that are not only lightly loaded but also likely to result in low substrate link stresses when they are connected. Details on the procedure are reported in [14].

In the next phases, the resource availability in the selected cluster is checked. The resource availability in the server nodes and physical network links is verified in the Server (lines 5–13) and Link (lines 14–20) Resource Availability Check Phases, respectively. If resources are not available in either links or nodes, the target SFC request is rejected. In particular, in the Server Resource Availability Check Phase, the algorithm verifies whether the allocation of new Vcores is needed and, in this case, their availability is checked (line 6). The variable $y_{w,k}$ is also updated in this phase (line 9).

Finally, the outputs (line 21) of the MASRN algorithm are the selected values of the variables $z^{k}_{v,w}$, $k \in [1, F]$, $v \in V^{SFC}_{k_{tar},F}$, $w \in V^{PN}_{S}$, and $u_{d,p}$, $d \in E^{SFC}_{k_{tar}}$, $p \in \mathcal{P}$.
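As an illustration, the two availability-check phases of Algorithm 1 can be sketched in Python as follows. The data structures (dictionaries keyed by servers, VNF types, and links) are our own hypothetical choice, the candidate mapping is assumed to be already computed by the selection steps of lines (3)-(4), and the rollback of partial updates upon rejection is omitted for brevity.

import math

def masrn_check(assignments, link_loads, A, S_N, S_L, y, n_core, c_pn, F):
    # Server Resource Availability Check Phase, lines (5)-(13):
    # each tuple is (w, k, demand) with demand = z * B_SFC(v) * t_proc_k.
    for w, k, demand in assignments:
        cores_other_types = sum(y[(w, s)] for s in range(F) if s != k)
        if cores_other_types + math.ceil(A[(k, w)] + demand) > n_core[w]:
            return False                     # REJECT the k_tar-th SFC request
        A[(k, w)] += demand                  # line (7)
        S_N[w] += demand                     # line (8)
        y[(w, k)] = math.ceil(A[(k, w)])     # line (9), on the updated load
    # Link Resource Availability Check Phase, lines (14)-(20):
    # each tuple is (e, extra) with the extra traffic the SFC's virtual
    # links would add to physical link e (the inner sums of line (15)).
    for e, extra in link_loads:
        if S_L[e] + extra > c_pn[e]:
            return False                     # REJECT the k_tar-th SFC request
        S_L[e] += extra                      # line (16)
    return True                              # the SFC request can be accepted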

The computational complexity of the MASRN algorithm depends on the procedure for the evaluation of the potential nodes [14]. Let $N_{s}$ be the total number of servers. The complexity of the phase in which the potential nodes are evaluated can be derived from the following remarks: (i) as many servers as the number of the VNFs of the SFC are determined, and (ii) the $h$th server is selected by evaluating the $(N_{s} - h)$ shortest paths and by performing a minimum operation on a list with $(N_{s} - h)$ elements. According to these remarks, the computational complexity is given by $O(V(V + N_{s})N_{l}\log(N_{s} + N_{n}))$, where $V$ is the maximum number of VNFs in an SFC and $N_{l}$ and $N_{n}$ are the numbers of links and nodes of the network, respectively.

Figure 2: SFC planner architecture. The orchestrator comprises the SFC Scheduler, the Resource Monitor, and the Consolidation Module, which exchange SFC requests, scheduling and migration decisions, and resource information.

4. Online Algorithms for SFC Routing in NFV Network Architectures

In the online approach, the SFC requests arrive at the system over time, and each of them is characterized by a certain duration. In order to reduce the complexity of the online SFC resource allocation algorithm, we designed an SFC planner, whose general architecture is described in Section 4.1. The operation mode of the SFC planner is based on the algorithms that we describe in Section 4.2.

4.1. SFC Planner Architecture. The architecture of the SFC planner is shown in Figure 2. It could make up the main component of an orchestrator in the NFV architecture [6]. It is composed of three components: the SFC Scheduler, the Resource Monitor, and the Consolidation Module.

Once the SFC Scheduler receives an SFC request, it executes an algorithm to verify whether resources are available and to decide which resources to use.

The physical resources, as well as the Virtual Machines (VMs) running on the servers, are monitored by the Resource Monitor. If a failure of any physical/virtual node or link occurs, it notifies the event to the SFC Scheduler.

The Consolidation Module allows for server resource consolidation in low traffic periods. When the consolidation technique is applied, VMs are migrated to as few servers as possible; the remaining ones are turned off, and a power consumption saving can be achieved.

4.2. SFC Scheduling and Consolidation Algorithms. In this section we describe the algorithms executed by the SFC Scheduler and the Consolidation Module.

Whenever a new SFC request arrives at the SFC planner, the SFC scheduling algorithm is executed in order to find the best embedding in the system while maintaining a uniform usage of the network resources. The main steps of this procedure are reported in the flow chart in Figure 3 and are based on the MASRN heuristic described in Section 3.3. The algorithm starts by selecting the set of servers on which the VNFs requested by the SFC will be executed. In the second step, the physical paths on which the virtual links will be mapped are selected. If there are not enough available resources in the first or in the second step, the request is dropped.

Figure 3: Flow chart of the SFC scheduling algorithm. Servers are selected using the potential factor [26], and then the paths are selected; the SFC request is dropped if resources are insufficient at either step, and accepted otherwise.

The flow chart of the consolidation algorithm is reported in Figure 4. Each server $s \in V^{PN}_{S}$ is characterized by a parameter $\eta_{s}$, defined as the ratio of the total amount of incoming/outgoing traffic $\Lambda_{s}$ that it is handling to the power $P_{s}$ it is consuming, that is, $\eta_{s} = \Lambda_{s}/P_{s}$. The power consumption model assumes a constant power contribution and a variable one that linearly increases versus the handled traffic. The server power consumption is expressed by

$$P_{s} = P_{idle} + (P_{max} - P_{idle})\,\frac{L_{s}}{C_{s}}, \quad \forall s \in V^{PN}_{S} \quad (17)$$

where $P_{idle}$ is the power consumed by the server when no traffic is handled, $P_{max}$ is the maximum server power consumption, $L_{s}$ is the total load handled by the server $s$, and $C_{s}$ is its maximum capacity. The maximum power that the server can consume is expressed as $P_{max} = P_{idle}/a$, where $a$ is a parameter characterizing how much the power consumption depends on the handled traffic. The power consumption is the more rate adaptive, the lower $a$ is. For $a = 1$, rate adaptive power consumption is absent and the consumed power is constant.
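As a small illustration of this model, the following lines compute $P_{s}$ and $\eta_{s}$; the parameter values ($P_{max} = 1000$ W, $a = 0.1$) match those used later in the numerical results, while the load value is an arbitrary example.

def server_power(load, capacity, p_max=1000.0, a=0.1):
    # Power model of (17): constant idle contribution plus a part that
    # grows linearly with the handled traffic; P_idle = a * P_max.
    p_idle = a * p_max
    return p_idle + (p_max - p_idle) * load / capacity

# eta_s = Lambda_s / P_s: traffic handled per watt, here for a
# half-loaded 10 Gbps server (arbitrary example values).
traffic = 5e9
power = server_power(load=traffic, capacity=10e9)   # 550.0 W for a = 0.1
print(power, traffic / power)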

The proposed algorithm tries switching off the server with minimum power consumption per bit carried. For this reason, when the consolidation algorithm is executed, it starts by selecting the server $s_{min}$ with the minimum value of $\eta_{s}$ as the one to turn off. A set of possible destination servers able to host the VMs executed on $s_{min}$ is then evaluated and sorted in descending order, based again on the value of $\eta_{s}$. The algorithm proceeds by selecting the first element of the ordered set and evaluating whether there are enough resources in the site to reroute all the flows generated from or terminated at $s_{min}$. If it succeeds, the migration is performed and the server $s_{min}$ is turned off. If it fails, the destination server is removed and the next server of the list is considered. When the ordered set of possible destinations becomes empty, another server to turn off is selected.

Figure 4: Flow chart of the SFC consolidation algorithm. The server with minimum $\eta_{s}$ is selected for switch-off; candidate destinations are sorted by descending $\eta_{s}$ and tried in turn, and the migration is performed as soon as one destination has enough resources.
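A compact sketch of this greedy loop is given below; can_host and migrate are hypothetical callbacks standing for the site-level resource check and for the actual VM migration and flow rerouting, which the text does not detail at this level.

def consolidate(active, eta, can_host, migrate):
    # Greedy consolidation of Figure 4: repeatedly try to switch off the
    # active server with the lowest eta_s = Lambda_s / P_s.
    progress = True
    while progress:
        progress = False
        for s_min in sorted(active, key=lambda s: eta[s]):
            # candidate destinations, sorted by descending eta_s
            destinations = sorted((d for d in active if d != s_min),
                                  key=lambda d: eta[d], reverse=True)
            for dest in destinations:
                if can_host(dest, s_min):
                    migrate(s_min, dest)   # reroute flows, move the VMs
                    active.remove(s_min)   # s_min can now be turned off
                    progress = True
                    break
            if progress:
                break   # eta values changed: re-evaluate from scratch
    return active       # servers still powered on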

The SFC Consolidation Module also manages the resource deconsolidation procedure, when the reactivation of physical servers/links is needed. Basically, the deconsolidation algorithm selects the most loaded VMs already migrated to a new server and moves them back to their original servers. The flows handled by these VMs are accordingly rerouted.

5. Numerical Results

We evaluate the effectiveness of the proposed algorithms in the case of the network scenario reported in Figure 5. The network is composed of five core nodes, five edge nodes, and six access nodes, in which the SFC requests are randomly generated and terminated. A Network Function Virtualization (NFV) site is connected to edge nodes 3 and 4 through two access routers 1 and 2. The NFV site is also composed of two switches and sixteen servers. In the basic configuration, 40 Gbps links are considered, except the links connecting the servers to the switches in the NFV site, whose rate is equal to 10 Gbps. The sixteen servers are equipped with 48 Vcores each.

Each SFC request is composed of three Virtual Network Functions (VNFs), that is, a firewall, a load balancer, and VPN encryption, whose packet processing times are assumed to be equal to 7.08 μs, 0.65 μs, and 1.64 μs [27], respectively. We also assume that the load balancer splits the input traffic towards a number of output links chosen uniformly from 1 to 3.

An example of the graph for a generated SFC is reported in Figure 6, in which $u_{t}$ is an ingress access node and $v^{1}_{t}$, $v^{2}_{t}$, and $v^{3}_{t}$ are egress access nodes. The first and second VNFs are a firewall and VPN encryption; the third VNF is a load balancer splitting the traffic towards three output links. In the considered case study, we also assume that the SFC handles traffic flows of packet length equal to 1500 bytes and required bandwidth uniformly distributed from 500 Mbps to 1 Gbps.

5.1. Offline SFC Scheduling. In the offline case we assume that a given number $T$ of SFC requests is a priori known. For the case study previously described, we apply the MASRN heuristic proposed in Section 3.3. We report the percentage of dropped SFCs in Figure 7 as a function of the offered number of SFCs. We report the curve for the basic scenario and the ones in which the link capacities are doubled, quadrupled, and increased by ten times, respectively. From Figure 7 we can see how, in order to have a dropped SFC percentage lower than 10%, the number of SFC requests has to be smaller than 60 in the case of the basic scenario. This remarkable blocking is due to the shortage of network capacity. In fact, we can notice from Figure 7 how the dropped SFC percentage significantly decreases when the link bandwidth increases. For instance, when the capacity is quadrupled, the network infrastructure is able to accommodate up to 180 SFCs without any loss.

This is confirmed in Figure 8, where we report the number of Vcores occupied in the servers in the case of $T = 360$. Each of the servers is represented on the x-axis with the identification (ID) of Figure 5. We also show in Figure 8 the number of Vcores allocated to each firewall, load balancer, and VPN encryption VNF with the blue, green, and red colors, respectively. The basic scenario case and the ones in which the link capacity is doubled, quadrupled, and increased by 10 times are reported in Figures 8(a), 8(b), 8(c), and 8(d), respectively. From Figure 8 we can notice how the MASRN heuristic works correctly by guaranteeing a uniform occupancy of the servers in terms of the total number of Vcores used. We can also notice how the firewall VNF is the one needing the highest number of allocated Vcores. That is a consequence of the higher processing time that the firewall VNF requires with respect to the load balancer and VPN encryption VNFs.

5.2. Online SFC Scheduling. The numerical results are achieved in the following traffic scenario. The traffic pattern we consider is supposed to be sinusoidal, with a peak value during daytime and a minimum value during nighttime. We also assume that SFC requests follow a Poisson process and that the SFC duration time is distributed according to an exponential distribution.


Figure 5: The network topology is composed of six access nodes, five edge nodes, and five core nodes. The NFV site is composed of two routers, two switches, and sixteen servers.

The following parameters define the SFC requests in the Peak Hour Interval (PHI):

(i) $\lambda$: the arrival rate of SFC requests;

(ii) $\mu$: the termination rate of an embedded SFC;

(iii) $[\beta^{min}, \beta^{max}]$: the range in which the bandwidth request for an SFC is randomly selected.

We consider $K$ time intervals in a day, and each of them is characterized by a scaling factor $\alpha_{h}$, with $h = 0, \ldots, K-1$. In order to capture the sinusoidal shape of the traffic in a day, $\alpha_{h}$ is evaluated with the following expression:

$$\alpha_{h} = \begin{cases} 1 & h = 0 \\ 1 - \dfrac{2}{K}\,(1 - \alpha_{min})\,h & h = 1, \ldots, \dfrac{K}{2} \\ 1 - \dfrac{2(K - h)}{K}\,(1 - \alpha_{min}) & h = \dfrac{K}{2} + 1, \ldots, K - 1 \end{cases} \quad (18)$$

where $\alpha_{0}$ and $\alpha_{min}$ refer to the peak traffic and the minimum traffic scenarios, respectively.

The parameter $\alpha_{h}$ affects the SFC arrival rate and the bandwidth requested. In particular, we have the following expressions for the SFC request rate $\lambda_{h}$ and the minimum and maximum requested bandwidths $\beta^{min}_{h}$ and $\beta^{max}_{h}$, respectively, in the interval $h$ ($h = 0, \ldots, K-1$):

$$\lambda_{h} = \alpha_{h}\,\lambda, \qquad \beta^{min}_{h} = \alpha_{h}\,\beta^{min}, \qquad \beta^{max}_{h} = \alpha_{h}\,\beta^{max} \quad (19)$$

Figure 6: An example of SFC composed of one firewall VNF, one VPN encryption VNF, and one load balancer VNF that splits the input traffic towards three output logical links; $u_{t}$ denotes the ingress access node, while $v^{1}_{t}$, $v^{2}_{t}$, and $v^{3}_{t}$ denote the egress access nodes.

Figure 7: Dropped SFC percentage as a function of the number of SFCs offered. The four curves are reported for the basic scenario case and the ones in which the link capacities are doubled, quadrupled, and increased by 10 times.
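The daily profile of (18)-(19) is simple to reproduce. The sketch below prints the per-interval arrival rate and bandwidth range for the parameter values used in this section; $\alpha_{min} = 0.2$ is an assumed value, since the text does not report it.

def alpha(h, K=24, alpha_min=0.2):
    # Scaling factor of (18): peak at h = 0, minimum alpha_min at h = K/2.
    if h == 0:
        return 1.0
    if h <= K // 2:
        return 1.0 - (2.0 / K) * (1.0 - alpha_min) * h
    return 1.0 - (2.0 * (K - h) / K) * (1.0 - alpha_min)

lam = 1.0                       # SFC requests per minute in the PHI
b_min, b_max = 500e6, 1e9       # bandwidth range [beta_min, beta_max] in bps
for h in range(24):             # per-interval parameters, as in (19)
    print(h, alpha(h) * lam, alpha(h) * b_min, alpha(h) * b_max)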

Next, we evaluate the performance of the SFC planner proposed in Section 4 in the dynamic traffic scenario, for parameter values $\beta^{min} = 500$ Mbps, $\beta^{max} = 1$ Gbps, $\lambda = 1$ req/min, and $\mu = 1/15$ min$^{-1}$. We also assume $K = 24$, with traffic change and consequently application of the consolidation algorithm every hour. Finally, we consider servers whose power consumption is characterized by expression (17), with parameters $P_{max} = 1000$ W and $C_{s} = 10$ Gbps.

We report the SFC blocking probability as a function of the average number of offered SFC requests during the PHI ($\lambda/\mu$) in Figure 9. We show the results in the case of the basic link capacity scenario and in the ones obtained considering a capacity of the links that is two and four times higher than the basic one. We consider the case of servers with no rate adaptive power consumption ($a = 1$). As we can observe, when the offered traffic is low, the degradation in terms of SFC blocking probability introduced by the consolidation technique increases. In fact, in this low traffic scenario more resource consolidation is possible, with the consequence of higher SFC blocking probabilities. When the offered traffic increases, the blocking probability with or without applying the consolidation technique does not change much, because the network is heavily loaded and only few servers can be turned off. Furthermore, as we can expect, the blocking probability decreases when the link capacity increases. The performance loss due to the application of the consolidation technique is what we have to pay in order to obtain benefits in terms of power saved by turning off servers. The curves in Figure 10 compare the power consumption percentage savings that can be achieved in the cases $a = 0.1$ and $a = 1$. The results show that, when the offered traffic decreases and the link capacity increases, a higher power consumption saving can be achieved. The reason is that we have more resources on the links and less loaded VMs, which allow for a higher number of VM migrations and more resource consolidation. The other result to notice is that the use of rate adaptive servers reduces the power consumption saving when we perform resource consolidation. This is due to the lower value $P_{idle}$ of the constant power consumption of the server, which leads to a lower power consumption saving when a server is switched off.

6. Conclusions

The aim of this paper has been to propose heuristics for the resource dimensioning and the routing of Service Function Chains in network architectures employing the Network Function Virtualization technology. We have introduced an optimization problem whose objective is to maximize the number of accepted SFCs while complying with the server processing and link bandwidth capacities. With the problem being NP-hard, we have proposed the Maximizing the Accepted SFC Requests Number (MASRN) heuristic, which is based on a uniform occupancy of the server and link resources. The heuristic is also able to dimension the Vcores of the servers, by evaluating the number of Vcores to be allocated to each VNF instance.

An SFC planner has also been proposed for the dynamic traffic scenario. The planner is based on a consolidation algorithm that allows for the switching off of servers during the low traffic periods. A case study has been analyzed, in which we have shown that the heuristic works correctly by uniformly allocating the server resources and reserving a higher number of Vcores for the VNFs characterized by higher packet processing times. Furthermore, the proposed SFC planner allows for a power consumption saving dependent on the offered traffic and varying from 10% to 70%. Such a saving is to be paid with an increase in SFC blocking probability. As future research, we will propose and investigate Virtual Machine migration policies allowing for the achievement of the right trade-off between power consumption saving and SFC blocking probability degradation.


Figure 8: Number of allocated Vcores in each server for $T = 360$, in the basic scenario (a) and with the link capacity doubled (b), quadrupled (c), and increased by 10 times (d). The numbers of Vcores allocated per VNF instance for the firewall, load balancer, and VPN encryption VNFs are represented with blue, green, and red colors, respectively.

Figure 9: SFC blocking probability as a function of the average number of SFCs offered in the PHI, for the cases in which the consolidation technique is applied and in which it is not, with link capacity ×1, ×2, and ×4. The server power consumption is constant ($a = 1$).


Figure 10: The power consumption percentage saving achieved with the application of the consolidation technique versus the average number of SFCs offered in the PHI, with link capacity ×1, ×2, and ×4. The cases $a = 1$ and $a = 0.1$ are considered.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

The work described in this paper has been funded by Telecom Italia within the research agreement titled "Definition and Evaluation of Protocols and Architectures of a Network Automation Platform for TI-NFV Infrastructure."

References

[1] R. Mijumbi, J. Serrat, J. Gorricho, N. Bouten, F. De Turck, and R. Boutaba, "Network function virtualization: state-of-the-art and research challenges," IEEE Communications Surveys & Tutorials, vol. 18, no. 1, pp. 236–262, 2016.

[2] P. Veitch, M. J. McGrath, and V. Bayon, "An instrumentation and analytics framework for optimal and robust NFV deployment," IEEE Communications Magazine, vol. 53, no. 2, pp. 126–133, 2015.

[3] R. Yu, G. Xue, V. T. Kilari, and X. Zhang, "Network function virtualization in the multi-tenant cloud," IEEE Network, vol. 29, no. 3, pp. 42–47, 2015.

[4] Z. Bronstein, E. Roch, J. Xia, and A. Molkho, "Uniform handling and abstraction of NFV hardware accelerators," IEEE Network, vol. 29, no. 3, pp. 22–29, 2015.

[5] B. Han, V. Gopalakrishnan, L. Ji, and S. Lee, "Network function virtualization: challenges and opportunities for innovations," IEEE Communications Magazine, vol. 53, no. 2, pp. 90–97, 2015.

[6] ETSI Industry Specification Group (ISG) NFV, "ETSI Group Specifications on Network Function Virtualization, 1st Phase Documents," January 2015, http://docbox.etsi.org/ISG/NFV/Open.

[7] H. Hawilo, A. Shami, M. Mirahmadi, and R. Asal, "NFV: state of the art, challenges, and implementation in next generation mobile networks (vEPC)," IEEE Network, vol. 28, no. 6, pp. 18–26, 2014.

[8] The Internet Engineering Task Force (IETF), Service Function Chaining (SFC) Working Group (WG), 2015, https://datatracker.ietf.org/wg/sfc/charter.

[9] The Internet Engineering Task Force (IETF), Service Function Chaining (SFC) Working Group (WG) Documents, 2015, https://datatracker.ietf.org/wg/sfc/documents.

[10] M. Yoshida, W. Shen, T. Kawabata, K. Minato, and W. Imajuku, "MORSA: a multi-objective resource scheduling algorithm for NFV infrastructure," in Proceedings of the 16th Asia-Pacific Network Operations and Management Symposium (APNOMS '14), pp. 1–6, Hsinchu, Taiwan, September 2014.

[11] H. Moens and F. D. Turck, "VNF-P: a model for efficient placement of virtualized network functions," in Proceedings of the 10th International Conference on Network and Service Management (CNSM '14), pp. 418–423, November 2014.

[12] R. Mijumbi, J. Serrat, J. L. Gorricho, N. Bouten, F. De Turck, and S. Davy, "Design and evaluation of algorithms for mapping and scheduling of virtual network functions," in Proceedings of the 1st IEEE Conference on Network Softwarization (NetSoft '15), pp. 1–9, University College London, London, UK, April 2015.

[13] S. Clayman, E. Maini, A. Galis, A. Manzalini, and N. Mazzocca, "The dynamic placement of virtual network functions," in Proceedings of the IEEE Network Operations and Management Symposium (NOMS '14), pp. 1–9, Krakow, Poland, May 2014.

[14] Y. Zhu and M. H. Ammar, "Algorithms for assigning substrate network resources to virtual network components," in Proceedings of the 25th IEEE International Conference on Computer Communications (INFOCOM '06), pp. 1–12, IEEE, Barcelona, Spain, April 2006.

[15] G. Lee, M. Kim, S. Choo, S. Pack, and Y. Kim, "Optimal flow distribution in service function chaining," in Proceedings of the 10th International Conference on Future Internet (CFI '15), pp. 17–20, ACM, Seoul, Republic of Korea, June 2015.

[16] A. Fischer, J. F. Botero, M. T. Beck, H. De Meer, and X. Hesselbach, "Virtual network embedding: a survey," IEEE Communications Surveys and Tutorials, vol. 15, no. 4, pp. 1888–1906, 2013.

[17] M. Rabbani, R. Pereira Esteves, M. Podlesny, G. Simon, L. Zambenedetti Granville, and R. Boutaba, "On tackling virtual data center embedding problem," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 177–184, Ghent, Belgium, May 2013.

[18] A. Basta, W. Kellerer, M. Hoffmann, H. J. Morper, and K. Hoffmann, "Applying NFV and SDN to LTE mobile core gateways: the functions placement problem," in Proceedings of the 4th Workshop on All Things Cellular: Operations, Applications and Challenges (AllThingsCellular '14), pp. 33–38, ACM, Chicago, Ill, USA, August 2014.

[19] M. Bagaa, T. Taleb, and A. Ksentini, "Service-aware network function placement for efficient traffic handling in carrier cloud," in Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC '14), pp. 2402–2407, IEEE, Istanbul, Turkey, April 2014.

[20] S. Mehraghdam, M. Keller, and H. Karl, "Specifying and placing chains of virtual network functions," in Proceedings of the IEEE 3rd International Conference on Cloud Networking (CloudNet '14), pp. 7–13, October 2014.

[21] M. Bouet, J. Leguay, and V. Conan, "Cost-based placement of vDPI functions in NFV infrastructures," in Proceedings of the 1st IEEE Conference on Network Softwarization (NetSoft '15), pp. 1–9, London, UK, April 2015.

[22] M. Xia, M. Shirazipour, Y. Zhang, H. Green, and A. Takacs, "Network function placement for NFV chaining in packet/optical datacenters," Journal of Lightwave Technology, vol. 33, no. 8, Article ID 7005460, pp. 1565–1570, 2015.

[23] O. Hyeonseok, Y. Daeun, C. Yoon-Ho, and K. Namgi, "Design of an efficient method for identifying virtual machines compatible with service chain in a virtual network environment," International Journal of Multimedia and Ubiquitous Engineering, vol. 11, no. 9, Article ID 197208, 2014.

[24] W. Cerroni and F. Callegati, "Live migration of virtual network functions in cloud-based edge networks," in Proceedings of the IEEE International Conference on Communications (ICC '14), pp. 2963–2968, IEEE, Sydney, Australia, June 2014.

[25] P. Quinn and J. Guichard, "Service function chaining: creating a service plane via network service headers," Computer, vol. 47, no. 11, pp. 38–44, 2014.

[26] M. F. Zhani, Q. Zhang, G. Simon, and R. Boutaba, "VDC Planner: dynamic migration-aware Virtual Data Center embedding for clouds," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 18–25, May 2013.

[27] M. Dobrescu, K. Argyraki, and S. Ratnasamy, "Toward predictable performance in software packet-processing platforms," in Proceedings of the 9th USENIX Symposium on Networked Systems Design and Implementation, pp. 1–14, April 2012.

Research Article
A Game for Energy-Aware Allocation of Virtualized Network Functions

Roberto Bruschi,1 Alessandro Carrega,1,2 and Franco Davoli1,2

1CNIT-University of Genoa Research Unit, 16145 Genoa, Italy
2DITEN, University of Genoa, 16145 Genoa, Italy

Correspondence should be addressed to Alessandro Carrega; alessandro.carrega@unige.it

Received 2 October 2015; Revised 22 December 2015; Accepted 11 January 2016

Academic Editor: Xavier Hesselbach

Copyright © 2016 Roberto Bruschi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Network Functions Virtualization (NFV) is a network architecture concept where network functionality is virtualized and separated into multiple building blocks that may connect or be chained together to implement the required services. The main advantages consist of an increase in network flexibility and scalability. Indeed, each part of the service chain can be allocated and reallocated at runtime, depending on demand. In this paper, we present and evaluate an energy-aware Game-Theory-based solution for resource allocation of Virtualized Network Functions (VNFs) within NFV environments. We consider each VNF as a player of the problem that competes for the physical network node capacity pool, seeking the minimization of an individual cost function. The physical network nodes dynamically adjust their processing capacity according to the incoming workload, by means of an Adaptive Rate (AR) strategy that aims at minimizing the product of energy consumption and processing delay. On the basis of the result of the nodes' AR strategy, the VNFs' resource sharing costs assume a polynomial form in the workflows, which admits a unique Nash Equilibrium (NE). We examine the effect of different (unconstrained and constrained) forms of the nodes' optimization problem on the equilibrium and compare the power consumption and delay achieved with energy-aware and non-energy-aware strategy profiles.

1. Introduction

In the last few years, power consumption has shown a growing and alarming trend in all industrial sectors, particularly in Information and Communication Technology (ICT). Public organizations, Internet Service Providers (ISPs), and telecom operators started reporting alarming statistics of network energy requirements and of the related carbon footprint since the first decade of the 2000s [1]. The Global e-Sustainability Initiative (GeSI) estimated a growth of ICT greenhouse gas emissions (in GtCO2e, Gt of CO2 equivalent gases) to 2.3% of global emissions (from 1.3% in 2002) in 2020, if no Green Network Technologies (GNTs) were adopted [2]. On the other hand, the abatement potential of ICT in other industrial sectors is seven times the size of the ICT sector's own carbon footprint.

Only recently, due to the rise in energy price, the continuous growth of the customer population, the increase in broadband access demand, and the expanding number of services being offered by telecoms and providers, has energy efficiency become a high-priority objective also for wired networks and service infrastructures (after having started to be addressed for datacenters and wireless networks).

The increasing network energy consumption essentially depends on new services offered, which follow Moore's law by doubling every two years, and on the need to sustain an ever-growing population of users and user devices. In order to support new generation network infrastructures and related services, telecoms and ISPs need a larger equipment base, with sophisticated architecture able to perform more and more complex operations in a scalable way. Notwithstanding these efforts, it is well known that most networks and networking equipment are currently still provisioned for busy or rush hour load, which typically exceeds their average utilization by a wide margin. While this margin is generally reached in rare and short time periods, the overall power consumption in today's networks remains more or less constant with respect to different traffic utilization levels.



The growing trend toward implementing networking functionalities by means of software [3] on general-purpose machines, and of making more aggressive use of virtualization, as represented by the paradigms of Software Defined Networking (SDN) [4] and Network Functions Virtualization (NFV) [5], would also not be sufficient in itself to reduce power consumption, unless accompanied by "green" optimization and consolidation strategies acting as energy-aware traffic engineering policies at the network level [6]. At the same time, processing devices inside the network need to be capable of adapting their performance to the changing traffic conditions, by trading off power and Quality of Service (QoS) requirements. Among the various techniques that can be adopted to this purpose to implement Control Policies (CPs) in network processing devices, Dynamic Adaptation ones consist of adapting the processing rate (AR) or of exploiting low power consumption states in idle periods (LPI) [7].

In this paper, we introduce a Game-Theory-based solution for energy-aware allocation of Virtualized Network Functions (VNFs) within NFV environments. In more detail, in NFV networks a collection of service chains must be allocated on physical network nodes. A service chain is a set of one or more VNFs grouped together to provide specific service functionality; it can be represented by an oriented graph, where each node corresponds to a particular VNF and each edge describes the operational flow exchanged between a pair of VNFs.

A service request can be allocated on dedicated hardware or by using resources deployed by the Service Provider (SP) that processes the request through virtualized instances. Because of this, two types of service deployments are possible in an NFV network: (i) on physical nodes and (ii) on virtualized instances.

In this paper we focus on the second type of service deployment; we refer to this paradigm as pure NFV. As already outlined above, the SP processes the service request by means of VNFs. In particular, we developed an energy-aware solution to the problem of VNFs' allocation on physical network nodes. This solution is based on the concept of Game Theory (GT). GT is used to model interactions among self-interested players and predict their choice of strategies to optimize cost or utility functions, until a Nash Equilibrium (NE) is reached, where no player can further increase its corresponding utility through individual action (see, e.g., [8] for specific applications in networking).

More specifically, we consider a bank of physical network nodes (in this paper we also use the terms node and resource to refer to the physical network node) performing tasks on requests submitted by the players' population. Hence, in this game the role of the players is represented by the VNFs, which compete for the processing capacity pool, each by seeking the minimization of an individual cost function. The nodes can dynamically adjust their processing capacity according to the incoming workload (the processing power required by incoming VNF requests) by means of an AR strategy that aims at minimizing the product of energy consumption and processing delay. On the basis of the result of the nodes' AR strategy, the VNFs' resource sharing costs assume a polynomial form in the workloads, which admits a unique NE.

We examine the effect of different (unconstrained and constrained) forms of the nodes' optimization problem on the equilibrium and compare the power consumption and delay achieved with energy-aware and non-energy-aware strategy profiles.

The rest of the paper is organized as follows. Section 2 discusses the related work. Section 3 briefly summarizes the power model and AR optimization that were introduced in [9]; although the scenario in [9] is different, it is reasonable to assume that similar considerations are valid here too. On the basis of this result, we derive a number of competitive strategies for the VNFs' allocation in Section 4. Section 5 presents various numerical evaluations, and Section 6 contains the conclusions.

2. Related Work

We now discuss the most relevant works that deal with the resource allocation of VNFs within NFV environments. We decided not only to describe solutions based on GT, but also to provide an overview of the current state of the art in this area. As introduced in the previous section, the key enabling paradigms that will considerably affect the dynamics of ICT networks are SDN and NFV, which are discussed in recent surveys [10–12]. Indeed, SPs and Network Operators (NOs) are facing increasing problems to design and implement novel network functionalities, following the rapid changes that characterize the current ISPs and telecom operators (TOs) [13].

Virtualization represents an efficient and cost-effective strategy to exploit and share physical network resources. In this context, the Network Embedding Problem (NEP) has been considered in several recent works [14–19]. In particular, the Virtual Network Embedding Problem (VNEP) consists of finding the mapping between a set of requests for virtual network resources and the available underlying physical infrastructure (the so-called substrate), ensuring that some given performance requirements (on nodes and links) are guaranteed. Typical node requirements are computational resources (i.e., CPU) or storage space, whereas links have a limited bandwidth and introduce a delay. It has been shown that this problem is NP-hard (it includes as subproblem the multiway separator problem). For this reason, heuristic approaches have been devised [20].

The consolidation of virtual resources is considered in [18] by taking into account energy efficiency. The problem is formulated as a mixed integer linear programming (MILP) model, to understand the potential benefits that can be achieved by packing many different virtual tasks on the same physical infrastructure. The observed energy saving is up to 30% for node energy consumption and up to 25% for link energy consumption. Reference [21] presents a solution for the resilient deployment of network functions, using OpenStack for the design and implementation of the proposed service orchestrator mechanism.

An allocation mechanism based on auction theory is proposed in [22]. In particular, the scheme selects the most remunerative virtual network requests, while satisfying QoS requirements and physical network constraints. The system is split into two network substrates modeling physical and virtual resources, with the final goal of finding the best mapping of virtual nodes and links onto physical ones, according to the QoS requirements (i.e., bandwidth, delay, and CPU bounds).

A novel network architecture is proposed in [23] to provide efficient coordinated control of both internal network function state and network forwarding state, in order to help operators achieve the following goals: (i) offering and satisfying tight service level agreements (SLAs); (ii) accurately monitoring and manipulating network traffic; and (iii) minimizing operating expenses.

Various engineering problems, where the action of one component has some impact on the other components, have been modeled by GT. Indeed, GT, which was applied at the beginning in economics and related domains, is gaining much interest today as a powerful tool to analyze and design communication networks [24]. Therefore, the problems can be formulated in the GT framework, and a stable solution for the components is obtained using the concept of equilibrium [25]. In this regard, GT has been used extensively to develop an understanding of stable operating points for autonomous networks. The nodes are considered as the players, and payoff functions are often defined according to achieved connection bandwidth or similar technical metrics.

To construct algorithms with provable convergence to equilibrium points, many approaches consider network models that can be mapped to specially constructed games. Among this type of games, potential games use a real-valued function that represents the entire player set to optimize some performance metric [8, 26]. We mention Altman et al. [27], who provide an extensive survey on networking games; the models and papers discussed in this reference mostly deal with noncooperative GT, the only exception being a short section focused on bargaining games. Finally, Seddiki et al. [25] presented an approach based on two-stage noncooperative games for bandwidth allocation, which aims at reducing the complexity of network management and avoiding bandwidth performance problems in a virtualized network environment. The first stage of the game is the bandwidth negotiation, where the SP requests bandwidth from multiple Infrastructure Providers (InPs); each InP decides whether to accept or deny the request, when the SP would cause link congestion. The second stage is the bandwidth provisioning game, where different SPs compete for the bandwidth capacity of a physical link managed by a single InP.

3. Modeling Power Management Techniques

Dynamic Adaptation techniques in physical network nodes reduce the energy usage by exploiting the fact that systems do not need to run at peak performance all the time.

Rate adaptation is obtained by tuning the clock frequency and/or voltage of processors (DVFS, Dynamic Voltage and Frequency Scaling) or by throttling the CPU clock (i.e., the clock signal is disabled for some number of cycles at regular intervals). Decreasing the operating frequency and the voltage of the node, or throttling its clock, obviously allows the reduction of power consumption and of heat dissipation, at the price of slower performance.

Advantage can be taken of these adaptation capabilities to devise control techniques based on feedback on the incoming load. One of the first proposals is that examined in a seminal paper by Nedevschi et al. [28]. Among other possibilities, we consider here the simple optimization strategy developed in [9], aimed at minimizing the product of power consumption and processing delay with respect to the processing load. The application of this control technique gives rise to a quadratic dependence of the power-delay product on the load, which will be exploited in our subsequent game-theoretic resource allocation problem. Unlike the cited works, our solution considers as load the request rate of services that the SP must process. However, apart from this difference, the general considerations described are valid here too.

Specifically, the dynamic frequency-dependent part of the processor power consumption is proportional to the clock frequency $\nu$ and to the square of the supply voltage $V$ [29]. However, if DVFS is used, there is proportionality between the frequency and the voltage raised to some power $\gamma$, $0 < \gamma \le 1$ [30]. Moreover, the power scaling effect induced by alternating active-idle periods in the queueing system served by the physical network node can be accounted for by multiplying the dynamic power consumption by an increasing concave function of the resource utilization, of the form $(f/\mu)^{1/\alpha}$, where $\alpha \ge 1$ is a parameter and $f$ and $\mu$ are the workload (processing requests per unit time) and the task processing speed, respectively. By taking $\gamma = 1$ (which implies a cubic relationship in the operating frequency), the overall dependence of the power consumption $\Phi$ on the processing speed (which is proportional to the operating frequency) can be expressed as [9]

$$\Phi(\mu) = K \mu^{3} \left(\frac{f}{\mu}\right)^{1/\alpha} \quad (1)$$

where $K > 0$ is a proportionality constant. This expression will be exploited in the optimization problems to be defined and studied in the next section.

4. System Model of Coordinated Network Management Optimization and Players' Game

We consider the situation shown in Figure 1. There are $S$ VNFs, represented by the incoming workload rates $\lambda_{1}, \ldots, \lambda_{S}$, competing for $N$ physical network nodes. The workload to the $j$th node is given by

$$f_{j} = \sum_{i=1}^{S} f^{i}_{j} \quad (2)$$

where $f^{i}_{j}$ is VNF $i$'s contribution. The total workload vector of player $i$ is $\mathbf{f}^{i} = \mathrm{col}[f^{i}_{1}, \ldots, f^{i}_{N}]$ and, obviously,

$$\sum_{j=1}^{N} f^{i}_{j} = \lambda_{i} \quad (3)$$


Figure 1: System model for VNFs resource allocation. The $S$ VNFs submit their workloads $\lambda_{1}, \ldots, \lambda_{S}$ to the Resource Allocator, which distributes the flows $f_{j} = \sum_{i=1}^{S} f^{i}_{j}$ among the $N$ physical nodes.

The computational resources have processing rates $\mu_{j}$, $j = 1, \ldots, N$, and we associate a cost function with each one, namely,

$$c_{j}(\mu_{j}) = \frac{K_{j}\, (f_{j})^{1/\alpha_{j}}\, (\mu_{j})^{3 - (1/\alpha_{j})}}{\mu_{j} - f_{j}} \quad (4)$$

Cost functions of the form shown in (4) have been introduced in [9] and represent the product of power and processing delay, under M/M/1 assumption on each node's queue. They actually correspond to the average energy consumption that can be attributed to the functions' handling (considering that the node will be active whenever there are queued functions). Despite the possible inaccuracy in the representation of the queueing behavior, the denominator of (4) reflects the penalty paid for approaching the resource capacity, which is a major aspect in a model to be used for control purposes.

We now consider two specific control problems, whose resulting resource allocations are interconnected. Specifically, we assume the presence of Control Policies (CPs) acting in the network node, whose aim is to dynamically adjust the processing rate in order to minimize cost functions of the form of (4). On the other hand, the players representing the VNFs, knowing the policy adopted by the CPs and the resulting form of their optimal costs, implement their own strategies to distribute their requests among the different resources.

The simple control structure that we envisage actually represents a situation that may be of interest in a number of different operational settings. We only mention two different cases of current high relevance: (i) a multitenant environment, where a number of ISPs are offered services by a datacenter operator (or by a telecom operator in the access network) on a number of virtual machines, and (ii) a collection of virtual environments created by a SP on behalf of its customers, where services are activated to represent customers' functionalities on their own private virtual LANs. It is worth noting that in both cases the nodes' customers may be unwilling to disclose their loads to one another, which justifies the decentralized game optimization.

4.1. Nodes' Control Policies. We consider two possible variants.

4.1.1. CP1 (Unconstrained). The direct minimization of (4) with respect to $\mu_{j}$ yields immediately

$$\mu^{*}_{j}(f_{j}) = \frac{3\alpha_{j} - 1}{2\alpha_{j} - 1}\, f_{j} = \vartheta_{j} f_{j},$$

$$c^{*}_{j}(f_{j}) = K_{j}\, \frac{(\vartheta_{j})^{3 - (1/\alpha_{j})}}{\vartheta_{j} - 1}\, (f_{j})^{2} = h_{j} \cdot (f_{j})^{2} \quad (5)$$

We must note that we are considering a continuous solution to the node capacity adjustment. In practice, the physical resources allow a discrete set of working frequencies, with corresponding processing capacities. This would also ensure that the processing speed does not decrease below a lower threshold, avoiding excessive delay in the case of low load. The unconstrained problem is an idealized variant that would make sense only when the load on the node is not too small.
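A few lines suffice to evaluate the CP1 operating point of (5); the parameter values below are arbitrary examples within the ranges used later in Section 5.

def cp1_operating_point(f, K=5.0, alpha=2.5):
    # Optimal rate and cost of CP1, from (5); theta and h depend only
    # on the node parameters K and alpha.
    theta = (3.0 * alpha - 1.0) / (2.0 * alpha - 1.0)
    h = K * theta ** (3.0 - 1.0 / alpha) / (theta - 1.0)
    mu_star = theta * f                  # optimal processing speed
    cost = h * f ** 2                    # power-delay product at the optimum
    power = K * mu_star ** 3 * (f / mu_star) ** (1.0 / alpha)   # Phi of (1)
    return mu_star, cost, power

print(cp1_operating_point(0.5))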

4.1.2. CP2 (Individual Constraints). Each node has a maximum and a minimum processing capacity, which are taken explicitly into account ($\mu^{min}_{j} \le \mu_{j} \le \mu^{max}_{j}$, $j = 1, \ldots, N$). Then

$$\mu^{*}_{j}(f_{j}) = \begin{cases} \vartheta_{j} f_{j} & \underline{f}_{j} = \dfrac{\mu^{min}_{j}}{\vartheta_{j}} \le f_{j} \le \dfrac{\mu^{max}_{j}}{\vartheta_{j}} = \overline{f}_{j} \\ \mu^{min}_{j} & 0 < f_{j} < \underline{f}_{j} \\ \mu^{max}_{j} & \mu^{max}_{j} > f_{j} > \overline{f}_{j} \end{cases} \quad (6)$$

Therefore,

$$c^{*}_{j}(f_{j}) = \begin{cases} h_{j} \cdot (f_{j})^{2} & \underline{f}_{j} \le f_{j} \le \overline{f}_{j} \\ \dfrac{K_{j}\,(f_{j})^{1/\alpha_{j}}\,(\mu^{max}_{j})^{3 - (1/\alpha_{j})}}{\mu^{max}_{j} - f_{j}} & \mu^{max}_{j} > f_{j} > \overline{f}_{j} \\ \dfrac{K_{j}\,(f_{j})^{1/\alpha_{j}}\,(\mu^{min}_{j})^{3 - (1/\alpha_{j})}}{\mu^{min}_{j} - f_{j}} & 0 < f_{j} < \underline{f}_{j} \end{cases} \quad (7)$$

In this case, we assume explicitly that $\sum_{i=1}^{S} \lambda_{i} < \sum_{j=1}^{N} \mu^{max}_{j}$.

4.2. Noncooperative Players' Optimization. Given the optimal CPs, which can be found in functional form, we can then state the following.

4.2.1. Players' Problem. Each VNF $i$ wants to minimize, with respect to its workload vector $\mathbf{f}^{i} = \mathrm{col}[f^{i}_{1}, \ldots, f^{i}_{N}]$, a weighted (over its workload distribution) sum of the resources' costs, given the flow distribution of the others:

$$\mathbf{f}^{i*} = \arg\min_{\substack{f^{i}_{1}, \ldots, f^{i}_{N} \\ \sum_{j=1}^{N} f^{i}_{j} = \lambda_{i}}} J^{i} = \arg\min_{\substack{f^{i}_{1}, \ldots, f^{i}_{N} \\ \sum_{j=1}^{N} f^{i}_{j} = \lambda_{i}}} \frac{1}{\lambda_{i}} \sum_{j=1}^{N} f^{i}_{j}\, c^{*}_{j}(f_{j}), \quad i = 1, \ldots, S \quad (8)$$

In this formulation, the players' problem is a noncooperative game, of which we can seek a NE [31].

We examine the case of the application of CP1 first. Then the cost of VNF $i$ is given by

$$J^{i} = \frac{1}{\lambda_{i}} \sum_{j=1}^{N} f^{i}_{j}\, h_{j} \cdot (f_{j})^{2} = \frac{1}{\lambda_{i}} \sum_{j=1}^{N} f^{i}_{j}\, h_{j} \cdot (f^{i}_{j} + f^{-i}_{j})^{2} \quad (9)$$

where $f^{-i}_{j} = \sum_{k \ne i} f^{k}_{j}$ is the total flow from the other VNFs to node $j$. The cost in (9) is convex in $f^{i}_{j}$ and belongs to a category of cost functions previously investigated in the networking literature without specific reference to energy efficiency [32, 33]. In particular, it is in the family of functions considered in [33], for which the NE Point (NEP) existence and uniqueness conditions of Rosen [34] have been shown to hold. Therefore, our players' problem admits a unique NEP. The latter can be found by considering the Karush-Kuhn-Tucker stationarity conditions with Lagrange multiplier $\xi_{i}$:

$$\xi_{i} = \frac{\partial J^{i}}{\partial f^{i}_{k}} \quad \text{if } f^{i}_{k} > 0; \qquad \xi_{i} \le \frac{\partial J^{i}}{\partial f^{i}_{k}} \quad \text{if } f^{i}_{k} = 0, \quad k = 1, \ldots, N \quad (10)$$

from which, for $f^{i}_{k} > 0$,

$$\lambda_{i}\xi_{i} = h_{k}\,(f^{i}_{k} + f^{-i}_{k})^{2} + 2 h_{k} f^{i}_{k}\,(f^{i}_{k} + f^{-i}_{k}),$$

$$f^{i}_{k} = -\frac{2}{3} f^{-i}_{k} \pm \frac{1}{3 h_{k}} \sqrt{(h_{k} f^{-i}_{k})^{2} + 3 h_{k} \lambda_{i} \xi_{i}} \quad (11)$$

Excluding the negative solution, the nonzero components are those with the smallest and equal partial derivatives; that is, in a subset $\mathcal{M} \subseteq \{1, \ldots, N\}$ they yield $\sum_{j \in \mathcal{M}} f^{i}_{j} = \lambda_{i}$, that is,

$$\sum_{j \in \mathcal{M}} \left[-\frac{2}{3} f^{-i}_{j} + \frac{1}{3 h_{j}}\sqrt{(h_{j} f^{-i}_{j})^{2} + 3 h_{j} \lambda_{i} \xi_{i}}\right] = \lambda_{i} \quad (12)$$

from which $\xi_{i}$ can be determined.
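The NEP of the quadratic game can be computed numerically from (11)-(12). The sketch below (our own illustration, not the authors' code) finds each player's best response by bisection on $t = \lambda_{i}\xi_{i}$, exploiting the fact that the total flow assigned in (12) is increasing in $t$, and then iterates best responses in Gauss-Seidel fashion; convergence to the unique NEP is suggested by the uniqueness result cited above, and the parameter values are arbitrary.

def best_response(lam_i, g, h, iters=100):
    # g[j] = f_j^{-i}: opponents' flows; h[j] > 0: node cost coefficients.
    def flows(t):
        # Candidate flows from (11), keeping the nonnegative solution.
        return [max(0.0, -2.0 * gj / 3.0 +
                    ((hj * gj) ** 2 + 3.0 * hj * t) ** 0.5 / (3.0 * hj))
                for gj, hj in zip(g, h)]
    lo, hi = 0.0, 1.0
    while sum(flows(hi)) < lam_i:     # bracket the multiplier t = lam_i * xi_i
        hi *= 2.0
    for _ in range(iters):            # bisection on the balance condition (12)
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if sum(flows(mid)) < lam_i else (lo, mid)
    return flows(hi)

def nash_equilibrium(lams, h, sweeps=200):
    S, N = len(lams), len(h)
    f = [[lam / N] * N for lam in lams]    # start from an even split
    for _ in range(sweeps):                # Gauss-Seidel best-response sweeps
        for i in range(S):
            g = [sum(f[k][j] for k in range(S) if k != i) for j in range(N)]
            f[i] = best_response(lams[i], g, h)
    return f

print(nash_equilibrium(lams=[0.6, 0.9], h=[2.0, 5.0, 3.0]))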

As regards form (7) of the optimal node cost, we can note that, if we are restricted to $\underline{f}_{j} \le f_{j} \le \overline{f}_{j}$, $j = 1, \ldots, N$ (a coupled convex constraint set), the conditions of Rosen still hold. If we allow the larger constraint set $0 < f_{j} < \mu^{max}_{j}$, $j = 1, \ldots, N$, the nodes' optimal cost functions are no longer of polynomial type. However, the composite function is still continuously differentiable (see the Appendix). Then it would (i) satisfy the conditions for a convex game (the overall function composed of the three parts is convex), which guarantee the existence of a NEP [34, Th. 1], and (ii) possess equivalent properties to functions of Type A as defined in [32], which lead to uniqueness of the NEP.

5. Numerical Results

In order to evaluate our criterion, we present numerical results deriving from the application of the simplest Control Policy (CP1), which implies the solution of a completely quadratic problem (corresponding to costs as in (9) for the NEP). We compare the resulting allocations, power consumption, and average delays with those stemming from the application (on non-energy-aware nodes) of the algorithm for the minimization of the pure delay function that was developed in [35, Proposition 1]. This algorithm is considered a valid reference point, in order to provide a good validation of our criterion. The corresponding cost of VNF $i$ is

$$J^{i}_{T} = \sum_{j=1}^{N} f^{i}_{j}\, T_{j}(f_{j}), \quad i = 1, \ldots, S \quad (13)$$

where
$$T_j(f_j) = \begin{cases} \dfrac{1}{\mu_j^{\max} - f_j}, & f_j < \mu_j^{\max}, \\ \infty, & f_j \ge \mu_j^{\max}. \end{cases} \tag{14}$$

The total demand is assumed to be less than the total operating capacity, as we have done with our CP2 in (7).

To make the comparison fair, we have to make sure that the maximum operating capacities $\mu_j^{\max}$, $j = 1, \ldots, N$ (that are constant in (14)), are compatible with the quadratic behavior stemming from the application of CP1. To this aim, let ${}^{(\mathrm{LCP1})}f_j$ denote the total input workload to node $j$ corresponding to the NEP obtained by our problem; we then choose
$$\mu_j^{\max} = \vartheta_j\, {}^{(\mathrm{LCP1})}f_j, \quad j = 1, \ldots, N. \tag{15}$$

We indicate with ${}^{(T)}f_j$ and ${}^{(T)}f_j^i$, $j = 1, \ldots, N$, $i = 1, \ldots, S$, the total node input workloads and those corresponding to VNF $i$, respectively, produced by the NEP deriving from the minimization of costs in (13). The average delays for the flow of player $i$ under the two strategy profiles are obtained as

$$\begin{aligned} D_T^i &= \frac{1}{\lambda^i} \sum_{j=1}^{N} \frac{{}^{(T)}f_j^i}{\mu_j^{\max} - {}^{(T)}f_j} = \frac{1}{\lambda^i} \sum_{j=1}^{N} \frac{{}^{(T)}f_j^i}{\vartheta_j\, {}^{(\mathrm{LCP1})}f_j - {}^{(T)}f_j}, \\ D_{\mathrm{LCP1}}^i &= \frac{1}{\lambda^i} \sum_{j=1}^{N} \frac{{}^{(\mathrm{LCP1})}f_j^i}{\vartheta_j\, {}^{(\mathrm{LCP1})}f_j - {}^{(\mathrm{LCP1})}f_j} = \frac{1}{\lambda^i} \sum_{j=1}^{N} \frac{{}^{(\mathrm{LCP1})}f_j^i}{(\vartheta_j - 1)\, {}^{(\mathrm{LCP1})}f_j}, \end{aligned} \tag{16}$$

and the power consumption values are given by
$$\begin{aligned} {}^{(T)}P_j &= K_j \left(\mu_j^{\max}\right)^3 = K_j \left(\vartheta_j\, {}^{(\mathrm{LCP1})}f_j\right)^3, \\ {}^{(\mathrm{LCP1})}P_j &= K_j \left(\vartheta_j\, {}^{(\mathrm{LCP1})}f_j\right)^3 \left( \frac{{}^{(\mathrm{LCP1})}f_j}{\vartheta_j\, {}^{(\mathrm{LCP1})}f_j} \right)^{1/\alpha_j} = K_j \left(\vartheta_j\, {}^{(\mathrm{LCP1})}f_j\right)^3 \vartheta_j^{-1/\alpha_j}. \end{aligned} \tag{17}$$

Our data for the comparison are organized as follows. We consider normalized input rates, so that $\lambda^i \le 1$, $i = 1, \ldots, S$. We generate randomly $S$ values corresponding to the VNFs' input rates from independent uniform distributions; then

(1) we find the NEP of our quadratic game by choosing the parameters $K_j$, $\alpha_j$, $j = 1, \ldots, N$, also from independent uniform distributions (with $1 \le K_j \le 10$ and $2 \le \alpha_j \le 3$, resp.);

(2) by using the workload profiles obtained, we compute the values $\mu_j^{\max}$, $j = 1, \ldots, N$, from (15) and find the NEP corresponding to costs in (13) and (14) by using the same input rates;

(3) we compute the corresponding average delays and power consumption values for the two cases by using (16) and (17), respectively (a sketch of this computation is given after this list).

Steps (1)–(3) are repeated with a new configuration of random values of the input rates for a certain number of times; then, for both games, we average the values of the delays and power consumption per VNF, and of the total delay and power consumption averaged over the VNFs, over all trials. We compare the results obtained in this way for four different settings of VNFs and nodes, namely, $S = N = 3$; $S = 3$, $N = 5$; $S = 5$, $N = 3$; $S = 5$, $N = 5$. The rationale of repeating the experiments with random configurations is to explore a number of different possible settings and collect the results in a synthetic form.
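For illustration, once the two NEPs are available (e.g., computed with a best-response iteration like the one sketched above), the evaluation of (15)–(17) is direct. The fragment below is a sketch of this postprocessing step under our own naming conventions; the actual results were produced with a MATLAB implementation [36]:

```python
def evaluate_metrics(lam, K, alpha, theta, f_cp1, f_t):
    """Postprocessing of the two NEPs (a sketch, all inputs assumed given):
    f_cp1[i][j] -- flows at the NEP of the quadratic game (CP1),
    f_t[i][j]   -- flows at the NEP of the pure-delay game (13)-(14).
    Returns per-player delays (16) and per-node powers (17)."""
    S, N = len(f_cp1), len(K)
    f_node = [sum(f_cp1[i][j] for i in range(S)) for j in range(N)]
    mu_max = [theta[j] * f_node[j] for j in range(N)]               # eq. (15)
    f_t_node = [sum(f_t[i][j] for i in range(S)) for j in range(N)]

    d_t = [sum(f_t[i][j] / (mu_max[j] - f_t_node[j]) for j in range(N)) / lam[i]
           for i in range(S)]                                       # eq. (16), D_T
    d_cp1 = [sum(f_cp1[i][j] / ((theta[j] - 1.0) * f_node[j]) for j in range(N))
             / lam[i] for i in range(S)]                            # eq. (16), D_LCP1
    p_t = [K[j] * mu_max[j] ** 3 for j in range(N)]                 # eq. (17)
    p_cp1 = [p_t[j] * theta[j] ** (-1.0 / alpha[j]) for j in range(N)]
    return d_t, d_cp1, p_t, p_cp1
```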

For each VNFs-nodes setting, 30 random input configurations are generated to produce the final average values. In Figures 2–10, the labels EE and NEE refer to the energy-efficient case (quadratic cost) and to the non-energy-efficient one (see (13) and (14)), respectively. The label "Saving (%)" refers instead to the saving (in percentage) of the EE case compared to the NEE one; when it is negative, the EE case presents higher values than the NEE one. Finally, the label "User" refers to the VNF.

Figure 2: Power consumption with $S = N = 3$.

Figure 3: User (VNF) delays with $S = N = 3$.

The results are reported in Figures 2–10. The model implementation and the relative results have been developed with MATLAB [36].

In general, the energy-efficient optimization tends to save energy at the expense of a relatively small increase in delay. Indeed, as shown in the figures, the saving is positive for the energy consumption and negative for the delay. The energy saving is always higher than 18% in every case, while the delay increase is always lower than 4%.

However, it can be noted (by comparing, e.g., Figures 5 and 7) that configurations with lower load are penalized in the delay. This effect is caused by the behavior of the simple linear adaptation strategy, which maintains constant utilization, and highlights the relevance of setting not only maximum but also minimum values for the processing capacity (or for the node input workloads).

Figure 4: Power consumption with $S = 3$, $N = 5$.

Figure 5: User (VNF) delays with $S = 3$, $N = 5$.

Figure 6: Power consumption with $S = 5$, $N = 3$.

Figure 7: User (VNF) delays with $S = 5$, $N = 3$.

Figure 8: Power consumption with $S = 5$, $N = 5$.

Figure 9: User (VNF) delays with $S = 5$, $N = 5$.

Figure 10: Power consumption and delay product of the various cases.

Another consideration arising from the results concerns the variation of the energy consumption when changing the number of players and/or of nodes. For example, Figure 6 (case $S = 5$, $N = 3$) shows a total power consumption significantly higher than the one shown in Figure 8 (case $S = 5$, $N = 5$). The reason for this difference is the lower workload managed by the nodes in the case $S = 5$, $N = 5$ (greater number of nodes with an equal number of users), which makes it possible to exploit the AR mechanisms with significant energy saving.

Finally, Figure 10 shows the product of the power consumption and the delay for each studied case. As described in the previous sections, our criterion aims at minimizing this product. As can be easily seen, the saving resulting from the application of our policy is always above 10%.

6. Conclusions

We have examined the possible effect of introducing simple energy optimization strategies in physical network nodes performing services for a set of VNFs that request their computational power within an NFV environment. The VNFs' strategy profiles for resource sharing have been obtained from the solution of a noncooperative game, where the players are aware of the adaptive behavior of the resources and aim at minimizing costs that reflect this behavior. We have compared the energy consumption and processing delay with those stemming from a game played with non-energy-aware nodes and VNFs. The results show that the effect on delay is almost negligible, whereas significant power saving can be obtained. In particular, the energy saving is always higher than 18% in every case, with a delay increase always lower than 4%.

From another point of view, it would also be of interest to investigate socially optimal solutions stemming from a Nash Bargaining Problem (NBP), along the lines of [37], by defining suitable utility functions for the players. Another interesting evolution of this work could be the application of additional constraints to the proposed solution, such as, for example, the maximum delay that a flow can support.

Appendix

We check the maintenance of continuous differentiability at the point $f_j = \mu_j^{\max} / \vartheta_j$. By noting that the derivative of the cost function of player $i$ with respect to $f_j^i$ has the form
$$\frac{\partial J^i}{\partial f_j^i} = c_j^{*}(f_j) + f_j^i\, c_j^{*\prime}(f_j), \tag{A.1}$$
it is immediate to note that we need to compare
$$\frac{\partial}{\partial f_j^i} \left[ K_j \frac{\left(f_j^i + f_j^{-i}\right)^{1/\alpha_j} \left(\mu_j^{\max}\right)^{3 - 1/\alpha_j}}{\mu_j^{\max} - f_j^i - f_j^{-i}} \right]_{f_j^i = \mu_j^{\max}/\vartheta_j - f_j^{-i}} \quad \text{with} \quad \frac{\partial}{\partial f_j^i} \left[ h_j (f_j)^2 \right]_{f_j^i = \mu_j^{\max}/\vartheta_j - f_j^{-i}}. \tag{A.2}$$
We have
$$\begin{aligned} & K_j \left. \frac{(1/\alpha_j) (f_j)^{1/\alpha_j - 1} \left(\mu_j^{\max}\right)^{3 - 1/\alpha_j} \left(\mu_j^{\max} - f_j\right) + (f_j)^{1/\alpha_j} \left(\mu_j^{\max}\right)^{3 - 1/\alpha_j}}{\left(\mu_j^{\max} - f_j\right)^2} \right|_{f_j = \mu_j^{\max}/\vartheta_j} \\ & \quad = \frac{K_j \left(\mu_j^{\max}\right)^{3 - 1/\alpha_j} \left(\mu_j^{\max}/\vartheta_j\right)^{1/\alpha_j} \left[ (1/\alpha_j) \left(\mu_j^{\max}/\vartheta_j\right)^{-1} \left(\mu_j^{\max} - \mu_j^{\max}/\vartheta_j\right) + 1 \right]}{\left(\mu_j^{\max} - \mu_j^{\max}/\vartheta_j\right)^2} \\ & \quad = \frac{K_j \left(\mu_j^{\max}\right)^{3} (\vartheta_j)^{-1/\alpha_j} \left[ (1/\alpha_j)\, \vartheta_j \left(1 - 1/\vartheta_j\right) + 1 \right]}{\left(\mu_j^{\max}\right)^2 \left(1 - 1/\vartheta_j\right)^2} = \frac{K_j \mu_j^{\max} (\vartheta_j)^{-1/\alpha_j} \left(\vartheta_j/\alpha_j - 1/\alpha_j + 1\right)}{\left((\vartheta_j - 1)/\vartheta_j\right)^2}, \\ & 2 h_j f_j \Big|_{f_j = \mu_j^{\max}/\vartheta_j} = 2 K_j \frac{(\vartheta_j)^{3 - 1/\alpha_j}}{\vartheta_j - 1} \cdot \frac{\mu_j^{\max}}{\vartheta_j}. \end{aligned} \tag{A.3}$$
Then let us check when
$$\frac{K_j \mu_j^{\max} (\vartheta_j)^{-1/\alpha_j} \left(\vartheta_j/\alpha_j - 1/\alpha_j + 1\right)}{\left((\vartheta_j - 1)/\vartheta_j\right)^2} = 2 K_j \frac{(\vartheta_j)^{3 - 1/\alpha_j}}{\vartheta_j - 1} \cdot \frac{\mu_j^{\max}}{\vartheta_j},$$
that is,
$$\frac{\left(\vartheta_j/\alpha_j - 1/\alpha_j + 1\right) (\vartheta_j)^2}{\vartheta_j - 1} = 2 (\vartheta_j)^2, \qquad \frac{\vartheta_j}{\alpha_j} - \frac{1}{\alpha_j} + 1 = 2 \vartheta_j - 2.$$
Substituting $\vartheta_j = (3\alpha_j - 1)/(2\alpha_j - 1)$,
$$\left( \frac{1}{\alpha_j} - 2 \right) \frac{3\alpha_j - 1}{2\alpha_j - 1} - \frac{1}{\alpha_j} + 3 = 0, \qquad \left( \frac{1 - 2\alpha_j}{\alpha_j} \right) \frac{3\alpha_j - 1}{2\alpha_j - 1} - \frac{1}{\alpha_j} + 3 = 0,$$
$$\frac{1 - 3\alpha_j}{\alpha_j} - \frac{1}{\alpha_j} + 3 = 0, \qquad 1 - 3\alpha_j - 1 + 3\alpha_j = 0, \tag{A.4}$$
which holds identically; hence the two derivatives in (A.2) coincide, and the composite cost function is continuously differentiable at $f_j = \mu_j^{\max}/\vartheta_j$.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was partially supported by the H2020 European Projects ARCADIA (http://www.arcadia-framework.eu/) and INPUT (http://www.input-project.eu/), funded by the European Commission.

References

[1] C. Bianco, F. Cucchietti, and G. Griffa, "Energy consumption trends in the next generation access network – a telco perspective," in Proceedings of the 29th International Telecommunication Energy Conference (INTELEC '07), pp. 737–742, Rome, Italy, September-October 2007.
[2] GeSI, SMARTer 2020: The Role of ICT in Driving a Sustainable Future, December 2012, http://gesi.org/portfolio/report/72.
[3] A. Manzalini, "Future edge ICT networks," IEEE COMSOC MMTC E-Letter, vol. 7, no. 7, pp. 1–4, 2012.
[4] B. A. A. Nunes, M. Mendonca, X.-N. Nguyen, K. Obraczka, and T. Turletti, "A survey of software-defined networking: past, present, and future of programmable networks," IEEE Communications Surveys & Tutorials, vol. 16, no. 3, pp. 1617–1634, 2014.
[5] "Network functions virtualisation – introductory white paper," in Proceedings of the SDN and OpenFlow World Congress, Darmstadt, Germany, October 2012.
[6] R. Bolla, R. Bruschi, F. Davoli, and C. Lombardo, "Fine-grained energy-efficient consolidation in SDN networks and devices," IEEE Transactions on Network and Service Management, vol. 12, no. 2, pp. 132–145, 2015.
[7] R. Bolla, R. Bruschi, F. Davoli, and F. Cucchietti, "Energy efficiency in the future internet: a survey of existing approaches and trends in energy-aware fixed network infrastructures," IEEE Communications Surveys & Tutorials, vol. 13, no. 2, pp. 223–244, 2011.
[8] I. Menache and A. Ozdaglar, Network Games: Theory, Models, and Dynamics, Morgan & Claypool Publishers, San Rafael, Calif, USA, 2011.
[9] R. Bolla, R. Bruschi, F. Davoli, and P. Lago, "Optimizing the power-delay product in energy-aware packet forwarding engines," in Proceedings of the 24th Tyrrhenian International Workshop on Digital Communications – Green ICT (TIWDC '13), pp. 1–5, IEEE, Genoa, Italy, September 2013.
[10] A. Khan, A. Zugenmaier, D. Jurca, and W. Kellerer, "Network virtualization: a hypervisor for the internet," IEEE Communications Magazine, vol. 50, no. 1, pp. 136–143, 2012.
[11] Q. Duan, Y. Yan, and A. V. Vasilakos, "A survey on service-oriented network virtualization toward convergence of networking and cloud computing," IEEE Transactions on Network and Service Management, vol. 9, no. 4, pp. 373–392, 2012.
[12] G. M. Roy, S. K. Saurabh, N. M. Upadhyay, and P. K. Gupta, "Creation of virtual node, virtual link and managing them in network virtualization," in Proceedings of the World Congress on Information and Communication Technologies (WICT '11), pp. 738–742, IEEE, Mumbai, India, December 2011.
[13] A. Manzalini, R. Saracco, C. Buyukkoc et al., "Software-defined networks or future networks and services – main technical challenges and business implications," in Proceedings of the IEEE Workshop SDN4FNS, Trento, Italy, November 2013.
[14] M. Yu, Y. Yi, J. Rexford, and M. Chiang, "Rethinking virtual network embedding: substrate support for path splitting and migration," ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, pp. 17–19, 2008.
[15] A. Jarray and A. Karmouch, "Decomposition approaches for virtual network embedding with one-shot node and link mapping," IEEE/ACM Transactions on Networking, vol. 23, no. 3, pp. 1012–1025, 2015.


[16] N. M. M. K. Chowdhury, M. R. Rahman, and R. Boutaba, "Virtual network embedding with coordinated node and link mapping," in Proceedings of the 28th Conference on Computer Communications (IEEE INFOCOM '09), pp. 783–791, Rio de Janeiro, Brazil, April 2009.
[17] X. Cheng, S. Su, Z. Zhang et al., "Virtual network embedding through topology-aware node ranking," ACM SIGCOMM Computer Communication Review, vol. 41, no. 2, pp. 38–47, 2011.
[18] J. F. Botero, X. Hesselbach, M. Duelli, D. Schlosser, A. Fischer, and H. De Meer, "Energy efficient virtual network embedding," IEEE Communications Letters, vol. 16, no. 5, pp. 756–759, 2012.
[19] A. Leivadeas, C. Papagianni, and S. Papavassiliou, "Efficient resource mapping framework over networked clouds via iterated local search-based request partitioning," IEEE Transactions on Parallel and Distributed Systems, vol. 24, no. 6, pp. 1077–1086, 2013.
[20] A. Fischer, J. F. Botero, M. T. Beck, H. de Meer, and X. Hesselbach, "Virtual network embedding: a survey," IEEE Communications Surveys & Tutorials, vol. 15, no. 4, pp. 1888–1906, 2013.
[21] M. Scholler, M. Stiemerling, A. Ripke, and R. Bless, "Resilient deployment of virtual network functions," in Proceedings of the 5th International Congress on Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT '13), pp. 208–214, Almaty, Kazakhstan, September 2013.
[22] A. Jarray and A. Karmouch, "VCG auction-based approach for efficient Virtual Network embedding," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 609–615, Ghent, Belgium, May 2013.
[23] A. Gember-Jacobson, R. Viswanathan, C. Prakash et al., "OpenNF: enabling innovation in network function control," in Proceedings of the ACM Conference on Special Interest Group on Data Communication (SIGCOMM '14), pp. 163–174, ACM, Chicago, Ill, USA, August 2014.
[24] J. Antoniou and A. Pitsillides, Game Theory in Communication Networks: Cooperative Resolution of Interactive Networking Scenarios, CRC Press, Boca Raton, Fla, USA, 2013.
[25] M. S. Seddiki, M. Frikha, and Y.-Q. Song, "A non-cooperative game-theoretic framework for resource allocation in network virtualization," Telecommunication Systems, vol. 61, no. 2, pp. 209–219, 2016.
[26] L. Pavel, Game Theory for Control of Optical Networks, Springer, New York, NY, USA, 2012.
[27] E. Altman, T. Boulogne, R. El-Azouzi, T. Jimenez, and L. Wynter, "A survey on networking games in telecommunications," Computers & Operations Research, vol. 33, no. 2, pp. 286–311, 2006.
[28] S. Nedevschi, L. Popa, G. Iannaccone, S. Ratnasamy, and D. Wetherall, "Reducing network energy consumption via sleeping and rate-adaptation," in Proceedings of the 5th USENIX Symposium on Networked Systems Design and Implementation (NSDI '08), pp. 323–336, San Francisco, Calif, USA, April 2008.
[29] S. Henzler, Power Management of Digital Circuits in Deep Sub-Micron CMOS Technologies, vol. 25 of Springer Series in Advanced Microelectronics, Springer, Dordrecht, The Netherlands, 2007.
[30] K. Li, "Scheduling parallel tasks on multiprocessor computers with efficient power management," in Proceedings of the IEEE International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum (IPDPSW '10), pp. 1–8, IEEE, Atlanta, Ga, USA, April 2010.
[31] T. Basar, Lecture Notes on Non-Cooperative Game Theory, Hamilton Institute, Kildare, Ireland, 2010, http://www.hamilton.ie/ollie/Downloads/Game.pdf.
[32] A. Orda, R. Rom, and N. Shimkin, "Competitive routing in multiuser communication networks," IEEE/ACM Transactions on Networking, vol. 1, no. 5, pp. 510–521, 1993.
[33] E. Altman, T. Basar, T. Jimenez, and N. Shimkin, "Competitive routing in networks with polynomial costs," IEEE Transactions on Automatic Control, vol. 47, no. 1, pp. 92–96, 2002.
[34] J. B. Rosen, "Existence and uniqueness of equilibrium points for concave N-person games," Econometrica, vol. 33, no. 3, pp. 520–534, 1965.
[35] Y. A. Korilis, A. A. Lazar, and A. Orda, "Architecting noncooperative networks," IEEE Journal on Selected Areas in Communications, vol. 13, no. 7, pp. 1241–1251, 1995.
[36] MATLAB: The Language of Technical Computing, http://www.mathworks.com/products/matlab/.
[37] H. Yaïche, R. R. Mazumdar, and C. Rosenberg, "A game theoretic framework for bandwidth allocation and pricing in broadband networks," IEEE/ACM Transactions on Networking, vol. 8, no. 5, pp. 667–678, 2000.

Research Article

A Processor-Sharing Scheduling Strategy for NFV Nodes

Giuseppe Faraci, Alfio Lombardo, and Giovanni Schembra

Dipartimento di Ingegneria Elettrica, Elettronica e Informatica (DIEEI), University of Catania, 95123 Catania, Italy

Correspondence should be addressed to Giovanni Schembra; schembra@dieei.unict.it

Received 2 November 2015; Accepted 12 January 2016

Academic Editor: Xavier Hesselbach

Copyright © 2016 Giuseppe Faraci et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The introduction of the two paradigms SDN and NFV to "softwarize" the current Internet is making management and resource allocation two key challenges in the evolution towards the Future Internet. In this context, this paper proposes Network-Aware Round Robin (NARR), a processor-sharing strategy to reduce delays in traversing SDN/NFV nodes. The application of NARR alleviates the job of the Orchestrator by automatically working at the intranode level, dynamically assigning the processor slices to the virtual network functions (VNFs) according to the state of the queues associated with the output links of the network interface cards (NICs). An extensive simulation set is presented to show the improvements achieved with respect to two other processor-sharing strategies chosen as reference.

1. Introduction

In the last few years, the diffusion of new complex and efficient distributed services in the Internet has become increasingly difficult because of the ossification of the Internet protocols, the proprietary nature of existing hardware appliances, the costs, and the lack of skilled professionals for maintaining and upgrading them.

In order to alleviate these problems, two new network paradigms, Software Defined Networks (SDN) [1–5] and Network Functions Virtualization (NFV) [6, 7], have recently been proposed with the specific target of improving the flexibility of network service provisioning and reducing the time to market of new services.

SDN is an emerging architecture that aims at making the network dynamic, manageable, and cost-effective by decoupling the system that makes decisions about where traffic is sent (the control plane) from the underlying system that forwards traffic to the selected destination (the data plane). In this way the network control becomes directly programmable, and the underlying infrastructure is abstracted for applications and network services.

NFV is a core structural change in the way telecommunication infrastructure is deployed. The NFV initiative started in late 2012 by some of the biggest telecommunications service providers, which formed an Industry Specification Group (ISG) within the European Telecommunications Standards Institute (ETSI). The interest has grown, involving today more than 28 network operators and over 150 technology providers from across the telecommunications industry [7]. The NFV paradigm leverages virtualization technologies and commercial off-the-shelf programmable hardware, such as general-purpose servers, storage, and switches, with the final target of decoupling the software implementation of network functions from the underlying hardware.

The coexistence and the interaction of both NFV and SDN paradigms is giving network operators the possibility of achieving greater agility and acceleration in new service deployments, with a consequent considerable reduction of both Capital Expenditure (CAPEX) and Operational Expenditure (OPEX) [8].

One of the most challenging problems in deploying an SDN/NFV network is an efficient design of the resource allocation and management functions that are in charge of the network Orchestrator. Although this task is well covered in data center and cloud scenarios [9, 10], it is currently a challenging problem in a geographic network, where transmission delays cannot be neglected and the transmission capacities of the interconnection links are not comparable with those of the above scenarios. This is the reason why the problem of orchestrating an SDN/NFV network is still open and attracts a lot of research interest from both academia and industry.

Hindawi Publishing Corporation, Journal of Electrical and Computer Engineering, Volume 2016, Article ID 3583962, 10 pages, http://dx.doi.org/10.1155/2016/3583962


More specifically, the whole problem of orchestrating an SDN/NFV network is very complex because it involves design work at both the internode and the intranode level [11, 12]. At the internode level, and in all the cases where the time between executions is significantly greater than the time needed to collect, compute, and disseminate results, the application of a centralized approach is practicable [13, 14]. In these cases, in fact, taking into consideration both traffic characterization and the required level of quality of service (QoS), the Orchestrator is able to decide in a centralized manner how many instances of the same function have to be run simultaneously, the network nodes that have to execute them, and the routing strategies that allow traffic flows to cross the nodes where the requested network functions are running.

Instead, centralizing a strategy dealing with operations that require dynamic reconfiguration of network resources is actually unfeasible for the Orchestrator, for problems of resilience and scalability. Therefore, adaptive management operations with short timescales require a distributed approach. The authors of [15] propose a framework to support adaptive resource management operations, which involve short-timescale reconfiguration of network resources, showing how requirements in terms of load balancing and energy management [16–18] can be satisfied. Another work in the same direction is [19], which proposes a solution for the consolidation of VMs on local computing resources, exclusively based on local information. The work in [20] aims at alleviating the inter-VM network latency, defining a hypervisor scheduler algorithm that is able to take into consideration the allocation of the resources within a consolidated environment, scheduling VMs to reduce their waiting latency in the run queue. Another approach is applied in [21], which introduces a policy to manage the internal on/off switching of virtual network functions (VNFs) in NFV-compliant Customer Premises Equipment (CPE) devices.

The focus of this paper is on another fundamental problem that is inherent to resource allocation within an SDN/NFV node, that is, the decision of the percentage of CPU to be assigned to each VNF. If at first glance this could look like a classical processor-sharing problem, which has been widely explored in the past literature [15, 22, 23], it is actually much more complex, because performance can be strongly improved by leveraging the correlation with the output queues associated with the network interface cards (NICs).

With all this in mind, the main contribution of this paper is the definition of a processor-sharing policy, in the following referred to as Network-Aware Round Robin (NARR), which is specific to SDN/NFV nodes. Starting from the consideration that packets that have received the service of a network function from a virtual machine (VM) running on a given node are enqueued to wait for transmission through a given NIC, the proposed strategy dynamically changes the slices of the CPU assigned to each VNF according to the state of the output NIC queues. More specifically, the NARR strategy gives a larger CPU slice to serve packets that will leave the node through the NIC that is currently less loaded, in such a way as to minimize wastes of the NIC output link capacities, also minimizing the overall delay experienced by packets traversing nodes that implement NARR.

As a side contribution, the paper calculates an on-off model for the traffic leaving the SDN/NFV node on each NIC output link. This model can be used as a building block for the design and performance evaluation of an entire network.

The paper is structured as follows. Section 2 describes the node architecture. Section 3 introduces the NARR strategy. Section 4 presents a case study and shows some numerical results. Finally, Section 5 draws some conclusions and discusses some future work.

2. System Description

The target of this section is the description of the system we consider in the rest of the paper. It is an SDN/NFV node, as the one considered in [11, 12], where we will apply the NARR strategy. Its architecture, shown in Figure 1(a), is compliant with the ETSI Specifications [24]. It is composed of three different domains, namely, the Compute domain, the Hypervisor domain, and the Infrastructure Network domain. The Compute domain provides the computational and storage hardware resources that allow the node to host the VNFs. Thanks to the computing and storage virtualization provided by the Hypervisor domain, a VM can be created, migrated from one node to another one, and halted, in order to optimize the deployment according to specific performance parameters. Communications among the VMs, and between the VMs and the external environment, are provided by the Infrastructure Network domain.

The SDN/NFV node is remotely controlled by the Orchestrator, whose architecture is shown in Figure 1(b). It is constituted by three main blocks. The Orchestration Engine executes all the algorithms and the strategies to manage and orchestrate the whole network. After each decision, the Orchestration Engine requests that the NFV Coordinator instantiate, migrate, or halt VMs and, consequently, requests that the SDN Controller modify the flow tables of the SDN switches in the network, in such a way that traffic flows can traverse the VMs hosting the requested VNFs.

With this in mind, a functional architecture of the NFV node is represented in Figure 2. Its main components are the Processor, which manages the Compute domain and hosts the Hypervisor domain, and the Network Interface Cards (NICs), with their queues, which constitute the "Network Hardware" block in Figure 1.

Let $M$ be the number of virtual network functions (VNFs) that are running in the node, and let $L$ be the number of output NICs. In order to simplify notation, in the following we will assume that all the NICs have the same characteristics in terms of buffer capacity and output rate. So, let $K^{(\mathrm{NIC})}$ be the size of the queue associated with each NIC, that is, the maximum number of packets that each queue can contain, and let $\mu^{(\mathrm{NIC})}$ be the transmission rate of the output link associated with each NIC, expressed in bit/s.

The Flow Distributor block has the task of routing each entering flow towards the function required by the flow. It is a software SDN switch that can be implemented, for example, with OpenvSwitch [25]. It routes the flows to the VMs running the requested VNFs according to the control messages received by the SDN Controller residing in the Orchestrator.


Figure 1: Network function virtualization infrastructure: (a) NFV node; (b) Orchestrator.

Figure 2: NFV node functional architecture.

The most common protocol that can be used for the communications between the SDN Controller and the Orchestrator is OpenFlow [26].

Let $\mu^{(P)}$ be the total processing rate of the processor, expressed in packets/s. This rate is shared among all the active functions according to a processor rate scheduling strategy. Let $\mu^{(F)}$ be the array whose generic element $\mu^{(F)}_{[m]}$, with $m \in \{1, \ldots, M\}$, is the portion of the processor rate assigned to the VM implementing the function $F_m$. Of course, we have
$$\sum_{m=1}^{M} \mu^{(F)}_{[m]} = \mu^{(P)}. \tag{1}$$

Once a packet has been served by the required function, it is sent to one of the NICs to exit from the node. If the NIC is transmitting another packet, the arriving packets are enqueued in the NIC queue. We will indicate the queue associated with the generic NIC $l$ as $Q^{(\mathrm{NIC})}_l$.

In order to implement the NARR strategy proposed in this paper, we realize the block relative to each function with a set of $L$ parallel queues, in the following referred to as inside-function queues, as shown in Figure 3 for the generic function $F_m$. The generic $l$th inside-function queue of the function $F_m$, indicated as $Q_{m,l}$ in Figure 3, is used to enqueue packets that, after receiving the service of the function $F_m$, will leave the node through the NIC $l$. Let $K^{(F)}_{\mathrm{Ins}}$ be the size of each inside-function queue. Each inside-function queue of the generic function $F_m$ receives a portion of the processor rate assigned to that function. Let $\mu^{(F_{\mathrm{Ins}})}_{[m,l]}$ be the portion of the processor rate assigned to the queue $Q_{m,l}$ of the function $F_m$. Of course, we have
$$\sum_{l=1}^{L} \mu^{(F_{\mathrm{Ins}})}_{[m,l]} = \mu^{(F)}_{[m]}. \tag{2}$$


Figure 3: Block diagram of the generic function $F_m$.

The portion of the processor rate associated with each inside-function queue is dynamically changed by the Processor Arbiter according to the NARR strategy described in Section 3.

3. NARR Processor-Sharing Strategy

The NARR (Network-Aware Round Robin) processor-sharing strategy observes the state of both the inside-function queues and the NIC queues, with the goal of reducing as far as possible the inactivity periods of the NIC output links. As already introduced in the Introduction, its definition starts from the fact that packets that have received the service of a VNF are enqueued to wait for transmission through a given NIC. So, in order to avoid output link capacity waste, NARR dynamically changes the slices of the CPU assigned to each VNF, and in particular to each inside-function queue, according to the state of the output NIC queues, assigning larger CPU slices to serve packets that will leave the node through less-loaded NICs.

More specifically, the Processor Arbiter decides the processor rate portions according to two different steps.

Step 1 (assignment of the processor rate portion to the aggregation of queues whose output is a specific NIC). This step meets the target of the proposed strategy, that is, to reduce as much as possible the underutilization of the NIC output links and, as a consequence, the delays in the relative queues. To this purpose, let us consider a virtual queue that contains all the packets that are stored in all the inside-function queues $Q_{m,l}$, for each $m \in [1, M]$, that is, all the packets that will leave the node through the NIC $l$. Let us indicate this virtual queue as $Q^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$ and its service rate as $\mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$. Of course, we have
$$\mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}} = \sum_{m=1}^{M} \mu^{(F_{\mathrm{Ins}})}_{[m,l]}. \tag{3}$$

With this in mind, the idea is to give a higher processor slice to the inside-function queues whose flows are directed to the NICs that are emptying.

Taking into account the goal of privileging the flows that will leave the node through underloaded NICs, the Processor Arbiter calculates $\mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$ as follows:
$$\mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}} = \frac{q_{\mathrm{ref}} - S^{(\mathrm{NIC})}_l}{\sum_{j=1}^{L} \left( q_{\mathrm{ref}} - S^{(\mathrm{NIC})}_j \right)} \, \mu^{(P)}, \tag{4}$$

where $S^{(\mathrm{NIC})}_l$ represents the state of the queue associated with the NIC $l$, while $q_{\mathrm{ref}}$ is defined as follows:
$$q_{\mathrm{ref}} = \min \left\{ \alpha \cdot \max_j \left( S^{(\mathrm{NIC})}_j \right), \, K^{(\mathrm{NIC})} \right\}. \tag{5}$$

The term $q_{\mathrm{ref}}$ is a reference target value calculated from the state of the NIC queue that has the highest length, amplified with a coefficient $\alpha$ and truncated to the maximum queue size $K^{(\mathrm{NIC})}$. It is determined in such a way that, if we consider $\alpha = 1$, the NIC queue with the highest length does not receive packets from the inside-function queues, because their service rate is set to zero, while the other queues receive packets with a rate that is proportional to the distance between their length and the length of the most overloaded NIC queue. However, through an extensive set of simulations, we deduced that setting $\alpha = 1$ causes bad performance, because there is always a group of inside-function queues that are not served. Instead, all the $\alpha$ values in the interval $]1, 2]$ give almost equivalent performance. For this reason, in the numerical analysis presented in Section 4, we have set $\alpha = 1.2$.

Step 2 (assignment of the processor rate portion to each inside-function queue). Let us consider the generic $l$th inside-function queue of the function $F_m$, that is, the queue $Q_{m,l}$. Its service rate is calculated as being proportional to the current state of this queue in comparison with the other $l$th queues of the other functions. To this purpose, let us indicate the state of the virtual queue $Q^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$ as $S^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$. Of course, it can be calculated as the sum of the states of all the inside-function queues $Q_{m,l}$, for each $m \in [1, M]$; that is,
$$S^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}} = \sum_{k=1}^{M} S^{(F_{\mathrm{Ins}})}_{k,l}. \tag{6}$$

So the service rate of the inside-function queue $Q_{m,l}$ is determined as a fraction of the service rate assigned at the first step to the virtual queue $Q^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$, $\mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$, as follows:
$$\mu^{(F_{\mathrm{Ins}})}_{[m,l]} = \frac{S^{(F_{\mathrm{Ins}})}_{m,l}}{S^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}} \, \mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}. \tag{7}$$

Of course, if at any time an inside-function queue remains empty, the processor rate portion assigned to it will be shared among the other queues, proportionally to the processor portions previously assigned. Likewise, if at some instant an empty queue receives a new packet, the previous processor rate portion is reassigned to that queue.
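The two steps can be summarized by the following Python sketch of (4)–(7); it is only illustrative (the public simulator [27] is the reference implementation), and the handling of corner cases, such as all NIC queues being empty, follows simple conventions of ours:

```python
def narr_rates(mu_p, s_nic, s_ins, k_nic, alpha=1.2):
    """Processor-rate split of NARR (a sketch of eqs. (4)-(7)).
    mu_p  : total processor rate mu^(P)
    s_nic : s_nic[l], current length of NIC queue l
    s_ins : s_ins[m][l], current length of inside-function queue Q_{m,l}
    k_nic : NIC queue size K^(NIC)
    Returns mu_ins[m][l], the rate of each inside-function queue."""
    L, M = len(s_nic), len(s_ins)

    # Step 1: share mu^(P) among the per-NIC aggregates, eqs. (4)-(5).
    q_ref = min(alpha * max(s_nic), k_nic)
    gaps = [max(q_ref - s, 0.0) for s in s_nic]   # distance from the reference level
    total_gap = sum(gaps)
    mu_aggr = [mu_p * g / total_gap if total_gap > 0 else mu_p / L for g in gaps]

    # Step 2: share each aggregate among its inside-function queues, eqs. (6)-(7).
    mu_ins = [[0.0] * L for _ in range(M)]
    for l in range(L):
        s_aggr = sum(s_ins[m][l] for m in range(M))   # eq. (6)
        if s_aggr == 0:
            continue  # empty aggregate: its slice is redistributed at the next update
        for m in range(M):
            mu_ins[m][l] = mu_aggr[l] * s_ins[m][l] / s_aggr   # eq. (7)
    return mu_ins
```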


4. Case Study

In this section we present a numerical analysis of the behavior of an SDN/NFV node that applies the proposed NARR processor-sharing strategy, with the target of evaluating the achieved performance. To this purpose, we will consider two other processor-sharing strategies as reference, in the following referred to as round robin (RR) and queue-length weighted round robin (QLWRR). In both the reference cases, the node has the same $L$ NIC queues, but it has only $M$ processor queues, one for each function. Each of the $M$ queues has a size of $K^{(F)} = L \cdot K^{(F)}_{\mathrm{Ins}}$, where $K^{(F)}_{\mathrm{Ins}}$ represents the size of each internal function queue already defined so far for the proposed strategy.

The RR strategy applies the classical round robin scheduling policy to serve the $M$ function queues; that is, it serves each function queue with a rate $\mu^{(F)}_{\mathrm{RR}} = \mu^{(P)}/M$.

The QLWRR strategy, on the other hand, serves each function queue with a rate that is proportional to the queue length; that is,
$$\mu^{(F_m)}_{\mathrm{QLWRR}} = \frac{S^{(F_m)}}{\sum_{k=1}^{M} S^{(F_k)}} \, \mu^{(P)}, \tag{8}$$
where $S^{(F_m)}$ is the state of the queue associated with the function $F_m$.
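For comparison, the two reference policies are straightforward; the following illustrative sketch assumes, by our own convention, an equal split when all function queues are empty:

```python
def rr_rates(mu_p, s_func):
    # Round robin: equal share of the processor for each of the M functions.
    return [mu_p / len(s_func)] * len(s_func)

def qlwrr_rates(mu_p, s_func):
    # Queue-length weighted round robin, eq. (8).
    total = sum(s_func)
    if total == 0:
        return rr_rates(mu_p, s_func)
    return [mu_p * s / total for s in s_func]
```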

4.1. Parameter Settings. In this numerical analysis we consider a node with $M = 4$ VNFs and $L = 3$ output NICs. We loaded the node with balanced traffic constituted by $N_F = 12$ flows, each characterized by a different 2-uple (requested VNF, output NIC). Each flow has been generated with an on-off model characterized by exponentially distributed on and off periods. When a flow is in the off state, no packets arrive to the node from it; instead, when it is in the on state, packets arrive with an average rate $\lambda_{\mathrm{ON}}$. Each packet is assumed to have an exponentially distributed size with a mean of 1 kbyte. In all the simulations we have considered the same on-off cycle duration $\delta = T_{\mathrm{OFF}} + T_{\mathrm{ON}} = 5$ msec, while we have varied the burstiness. The burstiness of on-off sources is defined as
$$b = \frac{\lambda_{\mathrm{ON}}}{\lambda_{\mathrm{Mean}}}, \tag{9}$$

where $\lambda_{\mathrm{Mean}}$ is the mean emission rate. Now, indicating the probability of the on state as $\pi_{\mathrm{ON}}$, and taking into account that $\lambda_{\mathrm{Mean}} = \lambda_{\mathrm{ON}} \cdot \pi_{\mathrm{ON}}$ and $\pi_{\mathrm{ON}} = T_{\mathrm{ON}} / (T_{\mathrm{OFF}} + T_{\mathrm{ON}})$, we have
$$b = \frac{T_{\mathrm{OFF}} + T_{\mathrm{ON}}}{T_{\mathrm{ON}}}. \tag{10}$$

In our analysis the burstiness has been varied in the range $[2, 22]$. Consequently, we have derived the mean durations of the off and on periods as follows:
$$T_{\mathrm{ON}} = \frac{\delta}{b}, \qquad T_{\mathrm{OFF}} = \delta - T_{\mathrm{ON}}. \tag{11}$$

Finally, in order to maintain the same mean packet rate for different values of $T_{\mathrm{OFF}}$ and $T_{\mathrm{ON}}$, we have assumed that 75 packets are transmitted on average in each on period, that is, $\lambda_{\mathrm{ON}} = 75 / T_{\mathrm{ON}}$. The resulting mean emission rate is $\lambda_{\mathrm{Mean}} = 122.9$ Mbit/s.
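As a quick numerical check of (9)–(11), the following sketch derives the source parameters from the burstiness; the packet size of 1 kbyte is assumed here to be 8192 bits, and the mean emission rate turns out to be independent of $b$:

```python
def onoff_params(b, delta=5e-3, pkts_per_on=75.0, pkt_bits=8 * 1024):
    """Derive the on-off source parameters from the burstiness b, eq. (11)."""
    t_on = delta / b
    t_off = delta - t_on
    lam_on = pkts_per_on / t_on                        # packets/s in the on state
    lam_mean_bps = lam_on * (t_on / delta) * pkt_bits  # = 75/delta * pkt_bits
    return t_on, t_off, lam_on, lam_mean_bps

# onoff_params(7.0)[-1] / 1e6  ->  ~122.9 (Mbit/s), for any b in [2, 22]
```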

As far as the NICs are concerned, we have considered an output rate of $\mu^{(\mathrm{NIC})} = 980$ Mbit/s, so having a utilization coefficient of $\rho^{(\mathrm{NIC})} = 0.5$ on each NIC, and a queue size of $K^{(\mathrm{NIC})} = 3000$ packets. Instead, regarding the processor, we considered a size of each inside-function queue of $K^{(F)}_{\mathrm{Ins}} = 3000$ packets. Finally, we have analyzed two different processor cases. In the first case we considered a processor that is able to process $\mu^{(P)} = 306$ kpackets/s, while in the second case we assumed a processor rate of $\mu^{(P)} = 204$ kpackets/s. Therefore, defining the processor utilization coefficient as
$$\rho^{(P)} = \frac{N_F \cdot \lambda_{\mathrm{Mean}}}{\mu^{(P)}}, \tag{12}$$
with $N_F \cdot \lambda_{\mathrm{Mean}}$ being the total mean arrival rate to the node, the two considered cases are characterized by processor utilization coefficients of $\rho^{(P)}_{\mathrm{Low}} = 0.6$ and $\rho^{(P)}_{\mathrm{High}} = 0.9$, respectively.

4.2. Numerical Results. In this section we present some results achieved by discrete-event simulations. The simulation tool used in the paper is publicly available in [27]. We first present a temporal analysis of the main variables characterizing the SDN/NFV node, and then we show a performance comparison of NARR with the two strategies RR and QLWRR, taken as a reference.

For the temporal analysis we focus on a short time interval of 720 μs, in order to be able to clearly highlight the evolution of the considered processes. We have loaded the node with on-off traffic like the one described in Section 4.1, with a burstiness $b = 7$.

Figures 4, 5, and 6 show the time evolution of the length of the queues associated with the NICs, the processor slice assigned to each virtual queue loading each NIC, and the length of the same virtual queues. We can subdivide the considered time interval into three different periods.

In the first period, ranging between the instants 0.1063 and 0.1067, from Figure 4 we can notice that the NIC queue $Q^{(\mathrm{NIC})}_1$ has a greater length than the queue $Q^{(\mathrm{NIC})}_3$, while $Q^{(\mathrm{NIC})}_2$ is empty. For this reason, in this period, as shown in Figure 5, the processor is shared between $Q^{(\rightarrow \mathrm{NIC}_1)}_{\mathrm{Aggr}}$ and $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$, and the slice assigned to serve $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ is higher, in such a way that the two queue lengths $Q^{(\mathrm{NIC})}_1$ and $Q^{(\mathrm{NIC})}_3$ reach the same value, a situation that happens at the end of the first period, around the instant 0.1067. During this period, the behavior of both the virtual queues $Q^{(\rightarrow \mathrm{NIC}_1)}_{\mathrm{Aggr}}$ and $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ in Figure 6 remains flat, showing the fact that the received processor rates are able to balance the arrival rates.

At the beginning of the second period, which ranges between the instants 0.1067 and 0.10693, the processor rate assigned to the virtual queue $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ has become insufficient to serve the amount of arriving traffic, and so the virtual queue $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ increases its length, as shown in Figure 6.

Figure 4: Length evolution of the NIC queues.

Figure 5: Packet rate assigned to the virtual queues.

Figure 6: Length evolution of the virtual queues.

During this second period, the processor slices assigned to the two queues $Q^{(\rightarrow \mathrm{NIC}_1)}_{\mathrm{Aggr}}$ and $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ are adjusted in such a way that $Q^{(\mathrm{NIC})}_1$ and $Q^{(\mathrm{NIC})}_3$ remain with comparable lengths.

The last period starts at the instant 0.10693, characterized by the fact that the aggregated queue $Q^{(\rightarrow \mathrm{NIC}_2)}_{\mathrm{Aggr}}$ leaves the empty state and therefore participates in the processor-sharing process. Since the NIC queue $Q^{(\mathrm{NIC})}_2$ is low-loaded, as shown in Figure 4, the largest slice is assigned to $Q^{(\rightarrow \mathrm{NIC}_2)}_{\mathrm{Aggr}}$, in such a way that $Q^{(\mathrm{NIC})}_2$ can reach the same length as the other two NIC queues as soon as possible.

Now, in order to show how the second step of the proposed strategy works, we present the behavior of the inside-function queues whose output is sent to the NIC queue $Q^{(\mathrm{NIC})}_3$ during the same short time interval considered so far. More specifically, Figures 7 and 8 show the length of the considered inside-function queues and the processor slices assigned to them, respectively. The behavior of the queue $Q_{2,3}$ is not shown, because it is empty in the considered period. As we can observe from the above figures, we can subdivide the interval into four periods:

(i) The first period, ranging in the interval [0.1063, 0.10657], is characterized by an empty state of the queue $Q_{3,3}$. Thus, in this period, the processor slice assigned to the aggregated queue $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ is shared by $Q_{1,3}$ and $Q_{4,3}$ only.

(ii) During the second period, ranging in the interval [0.10657, 0.1067], $Q_{1,3}$ is scarcely loaded (in particular, it is empty in the second part of this period), and so the processor slice assigned to $Q_{3,3}$ is increased.

(iii) In the third period, ranging between 0.1067 and 0.10693, all the queues increase and equally share the processor.

(iv) Finally, in the last period, starting at the instant 0.10693, as already observed in Figure 5, the processor slice assigned to the aggregated queue $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ is suddenly decreased, and consequently the slices assigned to the queues $Q_{1,3}$, $Q_{3,3}$, and $Q_{4,3}$ are decreased as well.

The steady-state analysis is presented in Figures 9, 10, and 11, which show the mean delay in the inside-function queues, the mean delay in the NIC queues, and the durations of the off- and on-states on the output links. The values reported in all the figures have been derived as the mean values of the results of many simulation experiments, using Student's t-distribution and with a 95% confidence interval. The number of experiments carried out to evaluate each numerical result has been automatically decided by the simulation tool, with the requirement of achieving a confidence interval of less than 0.001 of the estimated mean value. To this purpose, the confidence interval is calculated at the end of each run, and the simulation is stopped only when the confidence interval matches the maximum error requirement. The duration of each run has been chosen in such a way that the sample standard deviation is so low that less than 30 runs are enough to match the requirement.
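This stopping rule can be sketched as follows (illustrative Python; run_once stands for one independent simulation run, and the Student-t quantiles beyond ten samples are approximated rather than taken from exact tables):

```python
import math
from statistics import mean, stdev

# Two-sided 95% Student-t critical values for n-1 degrees of freedom
# (exact up to n = 10, approximated afterwards).
T95 = {2: 12.706, 3: 4.303, 4: 3.182, 5: 2.776, 6: 2.571, 7: 2.447,
       8: 2.365, 9: 2.306, 10: 2.262}

def t_quantile(n):
    return T95.get(n, 2.045 if n <= 30 else 1.96)

def run_until_confident(run_once, rel_err=1e-3, min_runs=3, max_runs=1000):
    """Add independent runs until the 95% confidence-interval half-width
    drops below rel_err times the estimated mean."""
    samples = []
    while len(samples) < max_runs:
        samples.append(run_once())   # one independent simulation run
        n = len(samples)
        if n >= min_runs:
            half_width = t_quantile(n) * stdev(samples) / math.sqrt(n)
            if half_width <= rel_err * abs(mean(samples)):
                break
    return mean(samples), len(samples)
```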


Figure 7: Length evolution of the inside-function queues $Q_{m,3}$, for each $m \in [1, M]$: (a) queue $Q_{1,3}$; (b) queue $Q_{3,3}$; (c) queue $Q_{4,3}$.

The figures compare the results achieved with the proposed strategy with the ones obtained with the two strategies RR and QLWRR. As said so far, results have been obtained against the burstiness and for two different values of the utilization coefficient, that is, $\rho^{(P)}_{\mathrm{Low}} = 0.6$ and $\rho^{(P)}_{\mathrm{High}} = 0.9$.

As expected, the mean lengths of the processor queues and the NIC queues increase with both the burstiness and the utilization coefficient.

Instead, we can note that the mean length of the processor queues is not affected by the applied policy. In fact, packets requiring the same function are enqueued in a unique queue, and served with a rate $\mu^{(P)}/4$, when the RR or QLWRR strategies are applied, while, when the NARR strategy is used, they are split into 12 different queues and served with a rate $\mu^{(P)}/12$. If, in classical queueing theory [28], the second case is worse than the first one, because of the presence of service rate wastes during the more probable periods of empty queues, this is not the case here, because the processor capacity of an empty queue is dynamically reassigned to the other queues.

The advantage of the NARR strategy is evident in Figure 10, where the mean delay in the NIC queues is represented. In fact, we can observe that, by only giving more processor rate to the most loaded processor queues (with the QLWRR strategy), performance improvements are negligible, while, by applying the NARR strategy, we are able to obtain a delay reduction of about 12% in the case of a more performant processor ($\rho^{(P)}_{\mathrm{Low}} = 0.6$), reaching 50% when the processor works with $\rho^{(P)}_{\mathrm{High}} = 0.9$. The performance gain achieved with the NARR strategy increases with the burstiness and the processor load, conditions that are both likely: the first because of the high burstiness of Internet traffic; the second because the processor should not be overdimensioned, for economic purposes; otherwise, if overdimensioned, it can be controlled with a processor rate management policy like the one presented in [29], in order to save energy.

Finally, Figure 11 shows the mean durations of the on- and off-periods on the node output links. Only one curve is shown for each case, because we have considered a utilization coefficient of $\rho^{(\mathrm{NIC})} = 0.5$ on each NIC queue, and therefore in the considered case we have $T_{\mathrm{ON}} = T_{\mathrm{OFF}}$.


Figure 8: Processor rate assigned to the inside-function queues $Q_{m,3}$, for each $m \in [1, M]$: (a) $\mu^{(F_{\mathrm{Ins}})}_{[1,3]}$; (b) $\mu^{(F_{\mathrm{Ins}})}_{[3,3]}$; (c) $\mu^{(F_{\mathrm{Ins}})}_{[4,3]}$.

As expected, the mean on-off durations are higher when the processor rate is higher (i.e., with the lower utilization coefficient): in this case the output rate of the processor is higher, and therefore batches of packets in the NIC queues are served quickly. These results can be used to model the output traffic of each SDN/NFV node as an Interrupted Poisson Process (IPP) [30]. This model can be iteratively used to represent the input traffic of other nodes, with the final target of realizing the model of a whole SDN/NFV network.

5. Conclusions and Future Work

This paper addresses the problem of intranode resource allocation by introducing NARR, a processor-sharing strategy that leverages the consideration that, in any SDN/NFV node, packets that have received the service of a VNF are enqueued to wait for transmission through one of the output NICs. Therefore, the idea at the base of NARR is to dynamically change the slices of the CPU assigned to each VNF according to the state of the output NIC queues, giving more CPU to serve packets that will leave the node through the less-loaded NICs. In this way, wastes of the NIC output link capacities are minimized, and consequently the overall delay experienced by packets traversing the nodes that implement NARR is reduced.

SDN/NFV nodes that implement NARR can coexist in the same network with nodes that use other strategies, so facilitating a gradual introduction in the Future Internet.

As a side contribution, the simulator tool, which is publicly available on the web, gives the on-off model of the output links associated with each NIC as one of its results. This model can be used as a building block to realize a model for the design and performance evaluation of a whole SDN/NFV network.

Figure 9: Mean delay in the function internal queues.

Figure 10: Mean delay in the NIC queues.


Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work has been partially supported by the INPUT (In-Network Programmability for Next-Generation Personal cloUd Service Support) project, funded by the European Commission under the Horizon 2020 Programme (Call H2020-ICT-2014-1, Grant no. 644672).

Figure 11: Mean off- and on-states duration of the traffic on the output links.


References

[1] "Software-Defined Networking: The New Norm for Networks," White paper, https://www.opennetworking.org/.
[2] M. Yu, L. Jose, and R. Miao, "Software defined traffic measurement with OpenSketch," in Proceedings of the Symposium on Network Systems Design and Implementation (NSDI '13), vol. 13, pp. 29–42, Lombard, Ill, USA, April 2013.
[3] H. Kim and N. Feamster, "Improving network management with software defined networking," IEEE Communications Magazine, vol. 51, no. 2, pp. 114–119, 2013.
[4] S. H. Yeganeh, A. Tootoonchian, and Y. Ganjali, "On scalability of software-defined networking," IEEE Communications Magazine, vol. 51, no. 2, pp. 136–141, 2013.
[5] B. A. A. Nunes, M. Mendonca, X.-N. Nguyen, K. Obraczka, and T. Turletti, "A survey of software-defined networking: past, present, and future of programmable networks," IEEE Communications Surveys and Tutorials, vol. 16, no. 3, pp. 1617–1634, 2014.
[6] "Network Functions Virtualisation," White paper, http://portal.etsi.org/NFV/NFV_White_Paper.pdf.
[7] Network Functions Virtualisation (NFV): Network Operator Perspectives on Industry Progress, ETSI, October 2013, http://portal.etsi.org/NFV/NFV_White_Paper2.pdf.
[8] A. Manzalini, R. Saracco, E. Zerbini et al., "Software-Defined Networks for Future Networks and Services," White paper based on the IEEE Workshop SDN4FNS, http://sites.ieee.org/sdn4fns/whitepaper/.
[9] R. Z. Frantz, R. Corchuelo, and J. L. Arjona, "An efficient orchestration engine for the cloud," in Proceedings of the IEEE 3rd International Conference on Cloud Computing Technology and Science (CloudCom '11), vol. 2, pp. 711–716, Athens, Greece, December 2011.


[10] K. Bousselmi, Z. Brahmi, and M. M. Gammoudi, "Cloud services orchestration: a comparative study of existing approaches," in Proceedings of the 28th IEEE International Conference on Advanced Information Networking and Applications Workshops (WAINA '14), pp. 410–416, Victoria, Canada, May 2014.
[11] A. Lombardo, A. Manzalini, V. Riccobene, and G. Schembra, "An analytical tool for performance evaluation of software defined networking services," in Proceedings of the IEEE Network Operations and Management Symposium (NOMS '14), pp. 1–7, Krakow, Poland, May 2014.
[12] A. Lombardo, A. Manzalini, G. Schembra, G. Faraci, C. Rametta, and V. Riccobene, "An open framework to enable NetFATE (network functions at the edge)," in Proceedings of the IEEE Workshop on Management Issues in SDN, SDI and NFV (Mission '15), pp. 1–6, London, UK, April 2015.
[13] A. Manzalini, R. Minerva, F. Callegati, W. Cerroni, and A. Campi, "Clouds of virtual machines in edge networks," IEEE Communications Magazine, vol. 51, no. 7, pp. 63–70, 2013.
[14] H. Moens and F. De Turck, "VNF-P: a model for efficient placement of virtualized network functions," in Proceedings of the 10th International Conference on Network and Service Management (CNSM '14), pp. 418–423, Rio de Janeiro, Brazil, November 2014.
[15] K. Katsalis, G. S. Paschos, Y. Viniotis, and L. Tassiulas, "CPU provisioning algorithms for service differentiation in cloud-based environments," IEEE Transactions on Network and Service Management, vol. 12, no. 1, pp. 61–74, 2015.
[16] D. Tuncer, M. Charalambides, G. Pavlou, and N. Wang, "Towards decentralized and adaptive network resource management," in Proceedings of the 7th International Conference on Network and Service Management (CNSM '11), pp. 1–6, Paris, France, October 2011.
[17] D. Tuncer, M. Charalambides, G. Pavlou, and N. Wang, "DACoRM: a coordinated, decentralized and adaptive network resource management scheme," in Proceedings of the IEEE Network Operations and Management Symposium (NOMS '12), pp. 417–425, IEEE, Maui, Hawaii, USA, April 2012.
[18] M. Charalambides, D. Tuncer, L. Mamatas, and G. Pavlou, "Energy-aware adaptive network resource management," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 369–377, Ghent, Belgium, May 2013.
[19] C. Mastroianni, M. Meo, and G. Papuzzo, "Probabilistic consolidation of virtual machines in self-organizing cloud data centers," IEEE Transactions on Cloud Computing, vol. 1, no. 2, pp. 215–228, 2013.
[20] B. Guan, J. Wu, Y. Wang, and S. U. Khan, "CIVSched: a communication-aware inter-VM scheduling technique for decreased network latency between co-located VMs," IEEE Transactions on Cloud Computing, vol. 2, no. 3, pp. 320–332, 2014.
[21] G. Faraci and G. Schembra, "An analytical model to design and manage a green SDN/NFV CPE node," IEEE Transactions on Network and Service Management, vol. 12, no. 3, pp. 435–450, 2015.
[22] L. Kleinrock, "Time-shared systems: a theoretical treatment," Journal of the ACM, vol. 14, no. 2, pp. 242–261, 1967.
[23] S. F. Yashkov and A. S. Yashkova, "Processor sharing: a survey of the mathematical theory," Automation and Remote Control, vol. 68, no. 9, pp. 1662–1731, 2007.
[24] ETSI GS NFV-INF 001 v1.1.1, "Network Functions Virtualization (NFV): Infrastructure Overview," 2015, https://www.etsi.org/deliver/etsi_gs/NFV-INF/001_099/001/01.01.01_60/gs_NFV-INF001v010101p.pdf.
[25] T. N. Subedi, K. K. Nguyen, and M. Cheriet, "OpenFlow-based in-network Layer-2 adaptive multipath aggregation in data centers," Computer Communications, vol. 61, pp. 58–69, 2015.
[26] N. McKeown, T. Anderson, H. Balakrishnan et al., "OpenFlow: enabling innovation in campus networks," ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, pp. 69–74, 2008.
[27] NARR simulator v0.1, http://www.diit.unict.it/arti/Tools/NARRSimulator_v01.zip.
[28] L. Kleinrock, Queueing Systems, Volume 1: Theory, Wiley-Interscience, 1st edition, 1975.
[29] R. Bruschi, P. Lago, A. Lombardo, and G. Schembra, "Modeling power management in networked devices," Computer Communications, vol. 50, pp. 95–109, 2014.
[30] W. Fischer and K. S. Meier-Hellstern, "The Markov-Modulated Poisson Process (MMPP) cookbook," Performance Evaluation, vol. 18, no. 2, pp. 149–171, 1993.


Editorial

Design of High Throughput and Cost-Efficient Data Center Networks

Vincenzo Eramo,1 Xavier Hesselbach-Serra,2 Yan Luo,3 and Juan Felipe Botero4

1 Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, 00184 Rome, Italy
2 Network Engineering Department (ENTEL), Universitat Politecnica de Catalunya, 08034 Barcelona, Spain
3 Department of Electronics and Computers Engineering (DECE), University of Massachusetts Lowell, Lowell, MA 01854, USA
4 Department of Electronic and Telecommunications Engineering (DETE), Universidad de Antioquia, Oficina 19-450, Medellín, Colombia

Correspondence should be addressed to Vincenzo Eramo; vincenzo.eramo@uniroma1.it

Received 20 April 2016; Accepted 20 April 2016

Copyright © 2016 Vincenzo Eramo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Data centers (DC) are characterized by the sharing of compute and storage resources to support Internet services. Today, many companies (Amazon, Google, Facebook, etc.) use data centers to offer storage, web search, and large-computation services, a multibillion-dollar business. The servers are interconnected by elements (switches, routers, interconnection systems, etc.) of a network platform that is referred to as a Data Center Network (DCN).

Network Function Virtualization (NFV) technology, introduced by the European Telecommunications Standards Institute (ETSI), applies cloud computing techniques in the telecommunication field, allowing network functions to be virtualized and executed in software modules running in data centers. Any network service is represented by a Service Function Chain (SFC), that is, a set of VNFs to be executed according to a given order. The running of VNFs requires the instantiation of VNF instances (VNFIs), which in general are software modules executed on virtual machines.

The support of NFV calls for high-performance servers, due to the higher requirements imposed by network services with respect to classical cloud applications.

The purpose of this special issue is to study and evaluate new solutions for the support of the NFV technology.

The special issue consists of four papers, whose brief summaries are listed below.

"Server Resource Dimensioning and Routing of Service Function Chain in NFV Network Architectures" by V. Eramo et al. focuses on the resource dimensioning and SFC routing problems in the NFV architecture. The objective of the problem is to minimize the number of SFCs dropped. The authors formulate the optimization problem and, due to its NP-hard complexity, heuristics are proposed for both cases of offline and online traffic demand.

"A Game for Energy-Aware Allocation of Virtualized Network Functions" by R. Bruschi et al. presents and evaluates an energy-aware, game-theory-based solution for resource allocation of Virtualized Network Functions (VNFs) within NFV environments. The authors consider each VNF as a player of the problem that competes for the physical network node capacity pool, seeking the minimization of individual cost functions. The physical network nodes dynamically adjust their processing capacity according to the incoming workload flows by means of an Adaptive Rate strategy that aims at minimizing the product of energy consumption and processing delay.

"A Processor-Sharing Scheduling Strategy for NFV Nodes" by G. Faraci et al. focuses on the allocation strategies of processing resources to the virtual machines running the VNFs. The main contribution of the paper is the definition of a processor-sharing policy, referred to as Network-Aware Round Robin (NARR). The proposed strategy dynamically changes the slices of the CPU assigned to each VNF according to the state of the output network interface card queues. In order not to waste output link bandwidth, more processing resources are assigned to the VNFs whose packets are addressed towards the least loaded output NIC.

"Virtual Networking Performance in OpenStack Platform for Network Function Virtualization" by F. Callegati et al. presents the performance evaluation of an open-source Virtual Infrastructure Manager (VIM), namely OpenStack, focusing in particular on packet forwarding performance issues. A set of experiments is presented that refer to a number of scenarios inspired by the cloud computing and NFV paradigms, considering both single- and multitenant scenarios.

Vincenzo Eramo
Xavier Hesselbach-Serra
Yan Luo
Juan Felipe Botero

Research Article

Virtual Networking Performance in OpenStack Platform for Network Function Virtualization

Franco Callegati, Walter Cerroni, and Chiara Contoli

DEI, University of Bologna, Via Venezia 52, 47521 Cesena, Italy

Correspondence should be addressed to Walter Cerroni; walter.cerroni@unibo.it

Received 19 October 2015; Revised 19 January 2016; Accepted 30 March 2016

Academic Editor: Yan Luo

Copyright © 2016 Franco Callegati et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The emerging Network Function Virtualization (NFV) paradigm, coupled with the highly flexible and programmatic control of network devices offered by Software Defined Networking solutions, enables unprecedented levels of network virtualization that will definitely change the shape of future network architectures, where legacy telco central offices will be replaced by cloud data centers located at the edge. On the one hand, this software-centric evolution of telecommunications will allow network operators to take advantage of the increased flexibility and reduced deployment costs typical of cloud computing. On the other hand, it will pose a number of challenges in terms of virtual network performance and customer isolation. This paper intends to provide some insights on how an open-source cloud computing platform such as OpenStack implements multitenant network virtualization and how it can be used to deploy NFV, focusing in particular on packet forwarding performance issues. To this purpose, a set of experiments is presented that refer to a number of scenarios inspired by the cloud computing and NFV paradigms, considering both single tenant and multitenant scenarios. From the results of the evaluation it is possible to highlight potentials and limitations of running NFV on OpenStack.

1. Introduction

Despite the original vision of the Internet as a set of networks interconnected by distributed layer 3 routing nodes, nowadays IP datagrams are not simply forwarded to their final destination based on IP header and next-hop information. A number of so-called middle-boxes process IP traffic, performing cross-layer tasks such as address translation, packet inspection and filtering, QoS management, and load balancing. They represent a significant fraction of network operators' capital and operational expenses. Moreover, they are closed systems, and the deployment of new communication services is strongly dependent on the product capabilities, causing the so-called "vendor lock-in" and Internet "ossification" phenomena [1]. A possible solution to this problem is the adoption of virtualized middle-boxes based on open software and hardware solutions. Network virtualization brings great advantages in terms of flexible network management, performed at the software level, and possible coexistence of multiple customers sharing the same physical infrastructure (i.e., multitenancy). Network virtualization solutions are already widely deployed at different protocol layers, including Virtual Local Area Networks (VLANs), multilayer Virtual Private Network (VPN) tunnels over public wide-area interconnections, and Overlay Networks [2].

Today, the combination of emerging technologies such as Network Function Virtualization (NFV) and Software Defined Networking (SDN) promises to bring innovation one step further. SDN provides a more flexible and programmatic control of network devices and fosters new forms of virtualization that will definitely change the shape of future network architectures [3], while NFV defines standards to deploy software-based building blocks implementing highly flexible network service chains capable of adapting to the rapidly changing user requirements [4].

As a consequence, it is possible to imagine a medium-term evolution of the network architectures where middle-boxes will turn into virtual machines (VMs) implementing network functions within cloud computing infrastructures, and telco central offices will be replaced by data centers located at the edge of the network [5–7]. Network operators will take advantage of the increased flexibility and reduced deployment costs typical of the cloud-based approach, paving the way to the upcoming software-centric evolution of telecommunications [8]. However, a number of challenges must be dealt with, in terms of system integration, data center management, and packet processing performance. For instance, if VLANs are used in the physical switches and in the virtual LANs within the cloud infrastructure, a suitable integration is necessary, and the coexistence of different IP virtual networks dedicated to multiple tenants must be seamlessly guaranteed with proper isolation.

Then a few questions are naturally raised: Will cloud computing platforms be actually capable of satisfying the requirements of complex communication environments such as the operators' edge networks? Will data centers be able to effectively replace the existing telco infrastructures at the edge? Will virtualized networks provide performance comparable to those achieved with current physical networks, or will they pose significant limitations? Indeed, the answer to these questions will be a function of the cloud management platform considered. In this work the focus is on OpenStack, which is among the state-of-the-art Linux-based virtualization and cloud management tools. Developed by the open-source software community, OpenStack implements the Infrastructure-as-a-Service (IaaS) paradigm in a multitenant context [9].

To the best of our knowledge, not much work has been reported about the actual performance limits of network virtualization in OpenStack cloud infrastructures under the NFV scenario. Some authors assessed the performance of Linux-based virtual switching [10, 11], while others investigated network performance in public cloud services [12]. Solutions for low-latency SDN implementation on high-performance cloud platforms have also been developed [13]. However, none of the above works specifically deals with NFV scenarios on an OpenStack platform. Although some mechanisms for effectively placing virtual network functions within an OpenStack cloud have been presented [14], a detailed analysis of their network performance has not been provided yet.

This paper aims at providing insights on how the OpenStack platform implements multitenant network virtualization, focusing in particular on the performance issues, trying to fill a gap that is starting to get attention also from the OpenStack developer community [15]. The paper objective is to identify performance bottlenecks in the cloud implementation of the NFV paradigms. An ad hoc set of experiments was designed to evaluate the OpenStack performance under critical load conditions, in both single tenant and multitenant scenarios. The results reported in this work extend the preliminary assessment published in [16, 17].

The paper is structured as follows: the network virtualization concept in cloud computing infrastructures is further elaborated in Section 2; the OpenStack virtual network architecture is illustrated in Section 3; the experimental test-bed that we have deployed to assess its performance is presented in Section 4; the results obtained under different scenarios are discussed in Section 5; some conclusions are finally drawn in Section 6.

2. Cloud Network Virtualization

Generally speaking, network virtualization is not a new concept. Virtual LANs, Virtual Private Networks, and Overlay Networks are examples of virtualization techniques already widely used in networking, mostly to achieve isolation of traffic flows and/or of whole network sections, either for security or for functional purposes such as traffic engineering and performance optimization [2].

Upon considering cloud computing infrastructures, the concept of network virtualization evolves even further. It is not just that some functionalities can be configured in physical devices to obtain some additional functionality in virtual form. In cloud infrastructures, whole parts of the network are virtual, implemented with software devices and/or functions running within the servers. This new "softwarized" network implementation scenario allows novel network control and management paradigms. In particular, the synergies between NFV and SDN offer programmatic capabilities that allow easily defining and flexibly managing multiple virtual network slices at levels not achievable before [1].

In cloud networking, the typical scenario is a set of VMs dedicated to a given tenant, able to communicate with each other as if connected to the same Local Area Network (LAN), independently of the physical server/servers they are running on. The VMs and LANs of different tenants have to be isolated and should communicate with the outside world only through layer 3 routing and filtering devices. From such requirements stem two major issues to be addressed in cloud networking: (i) integration of any set of virtual networks defined in the data center physical switches with the specific virtual network technologies adopted by the hosting servers and (ii) isolation among virtual networks that must be logically separated because of being dedicated to different purposes or different customers. Moreover, these problems should be solved with performance optimization in mind, for instance, aiming at keeping VMs with intensive exchange of data colocated in the same server, keeping local traffic inside the host and thus reducing the need for external network resources, and minimizing the communication latency.

The solution to these issues is usually fully supported by the VM manager (i.e., the Hypervisor) running on the hosting servers. Layer 3 routing functions can be executed by taking advantage of lightweight virtualization tools, such as Linux containers or network namespaces, resulting in isolated virtual networks with dedicated network stacks (e.g., IP routing tables and netfilter flow states) [18]. Similarly, layer 2 switching is typically implemented by means of kernel-level virtual bridges/switches interconnecting a VM's virtual interface to a host's physical interface. Moreover, the VM placing algorithms may be designed to take networking issues into account, thus optimizing the networking in the cloud together with computation effectiveness [19]. Finally, it is worth mentioning that whatever network virtualization technology is adopted within a data center, it should be compatible with SDN-based implementation of the control plane (e.g., OpenFlow), for improved manageability and programmability [20].
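For the reader less familiar with these tools, the following minimal Python sketch illustrates the kind of namespace-based isolation just described, by driving the standard iproute2 commands; the namespace and interface names (ns-tenant1, veth-host, veth-ns) and the address are arbitrary examples, not names used by OpenStack.

    import subprocess

    def sh(*cmd):
        # Run a command and fail loudly, so partial setups are noticed.
        subprocess.run(cmd, check=True)

    # Create an isolated namespace: it gets its own interfaces, routing
    # tables, and netfilter state, independent of the host's stack.
    sh("ip", "netns", "add", "ns-tenant1")

    # A veth pair acts as a virtual cable between host and namespace.
    sh("ip", "link", "add", "veth-host", "type", "veth", "peer", "name", "veth-ns")
    sh("ip", "link", "set", "veth-ns", "netns", "ns-tenant1")

    # Addressing and link activation on the namespace side happen entirely
    # inside the tenant's private network stack.
    sh("ip", "netns", "exec", "ns-tenant1",
       "ip", "addr", "add", "10.0.0.2/24", "dev", "veth-ns")
    sh("ip", "netns", "exec", "ns-tenant1", "ip", "link", "set", "veth-ns", "up")
    sh("ip", "link", "set", "veth-host", "up")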


Figure 1: Main components of an OpenStack cloud setup: controller node, network node, compute nodes, and storage node, interconnected by the management, instance/tunnel (data), and external networks; cloud customers reach the data center through the Internet.

For the purposes of this work, the implementation of layer 2 connectivity in the cloud environment is of particular relevance. Many Hypervisors running on Linux systems implement the LANs inside the servers using Linux Bridge, the native kernel bridging module [21]. This solution is straightforward and is natively integrated with the powerful Linux packet filtering and traffic conditioning kernel functions. The overall performance of this solution should be at a reasonable level when the system is not overloaded [22]. The Linux Bridge basically works as a transparent bridge with MAC learning, providing the same functionality as a standard Ethernet switch in terms of packet forwarding. But such standard behavior is not compatible with SDN and is not flexible enough when aspects such as multitenant traffic isolation, transparent VM mobility, and fine-grained forwarding programmability are critical. The Linux-based bridging alternative is Open vSwitch (OVS), a software switching facility specifically designed for virtualized environments and capable of reaching kernel-level performance [23]. OVS is also OpenFlow-enabled and therefore fully compatible and integrated with SDN solutions.
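To make the comparison concrete, the sketch below configures the same layer 2 attachment with the two technologies, using the standard iproute2 and Open vSwitch command-line tools driven from Python; the bridge names are illustrative and the tap interfaces (tap-a, tap-b) are assumed to already exist.

    import subprocess

    def sh(*cmd):
        subprocess.run(cmd, check=True)

    # Linux Bridge: the native kernel module, managed via iproute2.
    sh("ip", "link", "add", "name", "br-lb", "type", "bridge")
    sh("ip", "link", "set", "tap-a", "master", "br-lb")
    sh("ip", "link", "set", "br-lb", "up")

    # Open vSwitch: same layer 2 attachment, but OpenFlow-programmable.
    sh("ovs-vsctl", "add-br", "br-demo")
    sh("ovs-vsctl", "add-port", "br-demo", "tap-b")

    # Unlike the Linux Bridge, the OVS forwarding behavior can be
    # inspected and overridden with OpenFlow rules:
    sh("ovs-ofctl", "dump-flows", "br-demo")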

3. OpenStack Virtual Network Infrastructure

OpenStack provides cloud managers with a web-based dashboard, as well as a powerful and flexible Application Programmable Interface (API), to control a set of physical hosting servers executing different kinds of Hypervisors (in general, OpenStack is designed to manage a number of computers hosting application servers; these application servers can be executed by fully fledged VMs, lightweight containers, or bare-metal hosts; in this work we focus on the most challenging case of application servers running on VMs) and to manage the required storage facilities and virtual network infrastructures. The OpenStack dashboard also allows instantiating computing and networking resources within the data center infrastructure with a high level of transparency. As illustrated in Figure 1, a typical OpenStack cloud is composed of a number of physical nodes and networks:

(i) Controller node: managing the cloud platform.

(ii) Network node: hosting the networking services for the various tenants of the cloud and providing external connectivity.

(iii) Compute nodes: as many hosts as needed in the cluster to execute the VMs.

(iv) Storage nodes: to store data and VM images.

(v) Management network: the physical networking infrastructure used by the controller node to manage the OpenStack cloud services running on the other nodes.

(vi) Instance/tunnel network (or data network): the physical network infrastructure connecting the network node and the compute nodes, to deploy virtual tenant networks and allow inter-VM traffic exchange and VM connectivity to the cloud networking services running in the network node.

(vii) External network: the physical infrastructure enabling connectivity outside the data center.

OpenStack has a component specifically dedicated to network service management: this component, formerly known as Quantum, was renamed Neutron in the Havana release. Neutron decouples the network abstractions from the actual implementation and provides administrators and users with a flexible interface for virtual network management. The Neutron server is centralized and typically runs in the controller node. It stores all network-related information and implements the virtual network infrastructure in a distributed and coordinated way. This allows Neutron to transparently manage multitenant networks across multiple compute nodes and to provide transparent VM mobility within the data center.

Neutron's main network abstractions are:

(i) network: a virtual layer 2 segment;

(ii) subnet: a layer 3 IP address space used in a network;

(iii) port: an attachment point to a network and to one or more subnets on that network;

(iv) router: a virtual appliance that performs routing between subnets and address translation;

(v) DHCP server: a virtual appliance in charge of IP address distribution;

(vi) security group: a set of filtering rules implementing a cloud-level firewall.

A cloud customer wishing to implement a virtual infrastructure in the cloud is considered an OpenStack tenant and can use the OpenStack dashboard to instantiate computing and networking resources, typically creating a new network and the necessary subnets, optionally spawning the related DHCP servers, then starting as many VM instances as required, based on a given set of available images, and specifying the subnet (or subnets) to which the VM is connected. Neutron takes care of creating a port on each specified subnet (and its underlying network) and of connecting the VM to that port, while the DHCP service on that network (resident in the network node) assigns a fixed IP address to it. Other virtual appliances (e.g., routers providing global connectivity) can be implemented directly in the cloud platform, by means of containers and network namespaces typically defined in the network node. The different tenant networks are isolated by means of VLANs and network namespaces, whereas the security groups protect the VMs from external attacks or unauthorized access. When some VM instances offer services that must be reachable by external users, the cloud provider defines a pool of floating IP addresses on the external network and configures the network node with VM-specific forwarding rules based on those floating addresses.

OpenStack implements the virtual network infrastructure (VNI) exploiting multiple virtual bridges connecting virtual and/or physical interfaces that may reside in different network namespaces. To better understand such a complex system, a graphical tool was developed to display all the network elements used by OpenStack [24]. Two examples, showing the internal state of a network node connected to three virtual subnets and a compute node running two VMs, are displayed in Figures 2 and 3, respectively.

Each node runs an OVS-based integration bridge, named br-int, and, connected to it, an additional OVS bridge for each data center physical network attached to the node. So the network node (Figure 2) includes br-tun for the instance/tunnel network and br-ex for the external network. A compute node (Figure 3) includes br-tun only.

Layer 2 virtualization and multitenant isolation on the physical network can be implemented using either VLANs or layer 2-in-layer 3/4 tunneling solutions, such as Virtual eXtensible LAN (VXLAN) or Generic Routing Encapsulation (GRE), which allow extending the local virtual networks also to remote data centers [25]. The examples shown in Figures 2 and 3 refer to the case of tenant isolation implemented with GRE tunnels on the instance/tunnel network. Whatever virtualization technology is used in the physical network, its virtual networks must be mapped into the VLANs used internally by Neutron to achieve isolation. This is performed by taking advantage of the programmable features available in OVS, through the insertion of appropriate OpenFlow mapping rules in br-int and br-tun.
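As an illustration of this mapping, the sketch below installs two OpenFlow rules of the kind just described on br-tun. The port numbers, tunnel ID, and VLAN ID are hypothetical, and the actual rules installed by Neutron are organized differently across multiple flow tables; this is only meant to convey the principle.

    import subprocess

    def sh(*cmd):
        subprocess.run(cmd, check=True)

    # Incoming direction: traffic from the GRE port (here port 2) carrying
    # tunnel ID 0x1 is tagged with the VLAN that identifies the tenant
    # locally, then handled by the standard MAC-learning pipeline.
    sh("ovs-ofctl", "add-flow", "br-tun",
       "in_port=2,tun_id=0x1,actions=mod_vlan_vid:101,NORMAL")

    # Outgoing direction: the local VLAN tag is stripped and replaced by
    # the tunnel ID before the packet leaves through the GRE port.
    sh("ovs-ofctl", "add-flow", "br-tun",
       "in_port=1,dl_vlan=101,actions=strip_vlan,set_tunnel:0x1,output:2")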

Virtual bridges are interconnected by means of either virtual Ethernet (veth) pairs or patch port pairs, consisting of two virtual interfaces that act as the endpoints of a pipe: anything entering one endpoint always comes out on the other side.

From the networking point of view, the creation of a new VM instance involves the following steps:

(i) The OpenStack scheduler component running in the controller node chooses the compute node that will host the VM.

(ii) A tap interface is created for each VM network interface, to connect it to the Linux kernel.

(iii) A Linux Bridge dedicated to each VM network interface is created (in Figure 3 two of them are shown) and the corresponding tap interface is attached to it.

(iv) A veth pair connecting the new Linux Bridge to the integration bridge is created.

The veth pair clearly emulates the Ethernet cable that would connect the two bridges in real life. Nonetheless, why the new Linux Bridge is needed is not intuitive, as the VM's tap interface could be directly attached to br-int. In short, the reason is that the antispoofing rules currently implemented by Neutron adopt the native Linux kernel filtering functions (netfilter) applied to bridged tap interfaces, which work only under Linux Bridges. Therefore, the Linux Bridge is required as an intermediate element to interconnect the VM to the integration bridge. The security rules are applied to the Linux Bridge on the tap interface that connects the kernel-level bridge to the virtual Ethernet port of the VM running in user-space.
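The following sketch reproduces steps (iii) and (iv) for a single VM port, using interface names modeled on the qbr/qvb/qvo convention visible in Figure 3; the port ID is hypothetical and the tap interface is assumed to have been created by the Hypervisor already.

    import subprocess

    def sh(*cmd):
        subprocess.run(cmd, check=True)

    vm_id = "4345870b-63"  # hypothetical port ID (naming as in Figure 3)

    # Step (iii): per-VM Linux Bridge; the VM tap interface is attached to
    # it, and this is where the netfilter antispoofing rules are enforced.
    sh("ip", "link", "add", "name", f"qbr{vm_id}", "type", "bridge")
    sh("ip", "link", "set", f"tap{vm_id}", "master", f"qbr{vm_id}")

    # Step (iv): veth pair, with qvb plugged into the Linux Bridge and qvo
    # plugged into the OVS integration bridge br-int.
    sh("ip", "link", "add", f"qvb{vm_id}", "type", "veth",
       "peer", "name", f"qvo{vm_id}")
    sh("ip", "link", "set", f"qvb{vm_id}", "master", f"qbr{vm_id}")
    sh("ovs-vsctl", "add-port", "br-int", f"qvo{vm_id}")

    for dev in (f"qbr{vm_id}", f"qvb{vm_id}", f"qvo{vm_id}"):
        sh("ip", "link", "set", dev, "up")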

Figure 2: Network elements in an OpenStack network node connected to three virtual subnets. Three OVS bridges (red boxes) are interconnected by patch port pairs (orange boxes). br-ex is directly attached to the external network physical interface (eth0), whereas a GRE tunnel is established on the instance/tunnel network physical interface (eth1) to connect br-tun with its counterpart in the compute node. A number of br-int ports (light-green boxes) are connected to four virtual router interfaces and three DHCP servers. An additional physical interface (eth2) connects the network node to the management network.

4. Experimental Setup

The previous section makes the complexity of the OpenStack virtual network infrastructure clear. To understand optimal design strategies in terms of network performance, it is of great importance to analyze it under critical traffic conditions and to assess the maximum sustainable packet rate under different application scenarios. The goal is to isolate as much as possible the level of performance of the main OpenStack network components and determine where the bottlenecks are located, speculating on possible improvements. To this purpose, a test-bed including a controller node, one or two compute nodes (depending on the specific experiment), and a network node was deployed and used to obtain the results presented in the following. In the test-bed, each compute node runs KVM, the native Linux VM Hypervisor, and is equipped with 8 GB of RAM and a quad-core processor enabled to hyperthreading, resulting in 8 virtual CPUs.

The test-bed was configured to implement three possible use cases:

(1) a typical single tenant cloud computing scenario;

(2) a multitenant NFV scenario with dedicated network functions;

(3) a multitenant NFV scenario with shared network functions.

For each use case, multiple experiments were executed, as reported in the following. In the various experiments, typically a traffic source sends packets at increasing rate to a destination that measures the received packet rate and throughput. To this purpose, the RUDE & CRUDE tool was used for both traffic generation and measurement [26].

In some cases, the Iperf3 tool was also added to generate background traffic at a fixed data rate [27]. All physical interfaces involved in the experiments were Gigabit Ethernet network cards.
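Since the RUDE & CRUDE configuration details are not essential here, the following self-contained Python sketch conveys the measurement principle adopted in the experiments: a sender paces fixed-size UDP datagrams at a target packet rate, while a receiver counts what actually arrives. Addresses, ports, rate, and duration are arbitrary examples.

    import socket
    import time

    DEST = ("10.0.0.2", 5001)  # arbitrary receiver address and port
    PAYLOAD = 1472             # UDP payload giving a 1500-byte IP packet
    RATE_PPS = 20000           # target packet rate
    DURATION = 10              # seconds

    def sender():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        data = b"\x00" * PAYLOAD
        interval = 1.0 / RATE_PPS
        next_tx = time.time()
        end = next_tx + DURATION
        sent = 0
        while time.time() < end:
            sock.sendto(data, DEST)
            sent += 1
            next_tx += interval
            pause = next_tx - time.time()
            if pause > 0:
                time.sleep(pause)  # crude pacing; dedicated tools spin instead
        print(f"sent {sent} packets ({sent / DURATION:.0f} pps)")

    def receiver():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", DEST[1]))
        sock.settimeout(2.0)
        received = 0
        try:
            while True:
                sock.recv(PAYLOAD)
                received += 1
        except socket.timeout:
            pass
        print(f"received {received} packets")

    # Run receiver() on the destination host first, then sender() on the source.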

Figure 3: Network elements in an OpenStack compute node running two VMs. Two Linux Bridges (blue boxes) are attached to the VM tap interfaces (green boxes) and connected by virtual Ethernet pairs (light-blue boxes) to br-int.

4.1. Single Tenant Cloud Computing Scenario. This is the typical configuration where a single tenant runs one or multiple VMs that exchange traffic with one another in the cloud or with an external host, as shown in Figure 4. This is a rather trivial case of limited general interest, but it is useful to assess some basic concepts and pave the way to the deeper analysis developed in the second part of this section. In the experiments reported, as mentioned above, the virtualization Hypervisor was always KVM. A scenario with OpenStack running the cloud environment and a scenario without OpenStack were considered, to provide a general comparison and allow a first isolation of the performance degradation due to the individual building blocks, in particular Linux Bridge and OVS. The experiments report the following cases:

(1) OpenStack scenario: it adopts the standard OpenStack cloud platform, as described in the previous section, with two VMs acting as sender and receiver, respectively. In particular, the following setups were tested:

(1.1) a single compute node executing two colocated VMs;

(1.2) two distinct compute nodes, each executing a VM.

(2) Non-OpenStack scenario: it adopts physical hosts running Linux-Ubuntu server and KVM Hypervisor, using either OVS or Linux Bridge as a virtual switch. The following setups were tested:

(2.1) one physical host executing two colocated VMs, acting as sender and receiver and directly connected to the same Linux Bridge;

(2.2) the same setup as the previous one, but with an OVS bridge instead of a Linux Bridge;

(2.3) two physical hosts: one executing the sender VM connected to an internal OVS, and the other natively acting as the receiver.

Figure 4: Reference logical architecture of a single tenant virtual infrastructure with 5 hosts: 4 hosts are implemented as VMs in the cloud and are interconnected via the OpenStack layer 2 virtual infrastructure; the 5th host is implemented by a physical machine placed outside the cloud but still connected to the same logical LAN.

Figure 5: Multitenant NFV scenario with dedicated network functions, tested on the OpenStack platform: each tenant slice (Tenant 1 to Tenant N) includes a customer VM and a DPI appliance, connected to the outside through a shared virtual router.

4.2. Multitenant NFV Scenario with Dedicated Network Functions. The multitenant scenario we want to analyze is inspired by a simple NFV case study, as illustrated in Figure 5: each tenant's service chain consists of a customer-controlled VM followed by a dedicated deep packet inspection (DPI) virtual appliance and a conventional gateway (router) connecting the customer LAN to the public Internet. The DPI is deployed by the service operator as a separate VM with two network interfaces, running a traffic monitoring application based on the nDPI library [28]. It is assumed that the DPI analyzes the traffic profile of the customers (source and destination IP addresses and ports, application protocol, etc.) to guarantee the matching with the customer service level agreement (SLA), a practice that is rather common among Internet service providers to enforce network security and traffic policing. The virtualization approach, executing the DPI in a VM, makes it possible to easily configure and adapt the inspection function to the specific tenant characteristics. For this reason, every tenant has its own DPI with dedicated configuration. On the other hand, the gateway has to implement a standard functionality and is shared among customers. It is implemented as a virtual router for packet forwarding and NAT operations.

The implementation of the test scenarios has been done following the OpenStack architecture. The compute nodes of the cluster run the VMs, while the network node runs the virtual router within a dedicated network namespace. All layer 2 connections are implemented by a virtual switch (with proper VLAN isolation) distributed in both the compute and network nodes. Figure 6 shows the view provided by the OpenStack dashboard in the case of 4 tenants simultaneously active, which is the one considered for the numerical results presented in the following. The choice of 4 tenants was made to provide meaningful results with an acceptable degree of complexity, without lack of generality. As results show, this is enough to put the hardware resources of the compute node under stress and therefore evaluate performance limits and critical issues.

It is very important to outline that the VM setup shown in Figure 5 is not commonly seen in a traditional cloud computing environment. The VMs usually behave as single hosts connected as endpoints to one or more virtual networks, with one single network interface and no pass-through forwarding duties. In NFV, the virtual network functions (VNFs) often perform actions that require packet forwarding. Network Address Translators (NATs), Deep Packet Inspectors (DPIs), and so forth all belong to this category. If such VNFs are hosted in VMs, the result is that VMs in the OpenStack infrastructure must be allowed to perform packet forwarding, which goes against the typical rules implemented for security reasons in OpenStack. For instance, when a new VM is instantiated, it is attached to a Linux Bridge to which filtering rules are applied, with the goal of avoiding that the VM sends packets with MAC and IP addresses that are not the ones allocated to the VM itself. Clearly, this is an antispoofing rule that makes perfect sense in a normal networking environment, but it impairs the forwarding of packets originated by another VM, as is the case of the NFV scenario. In the scenario considered here, it was therefore necessary to permanently modify the filtering rules in the Linux Bridges, by allowing, within each tenant slice, packets coming from or directed to the customer VM's IP address to pass through the Linux Bridges attached to the DPI virtual appliance. Similarly, the virtual router is usually connected just to one LAN. Therefore, its NAT function is configured for a single pool of addresses. This was also modified and adapted to serve the whole set of internal networks used in the multitenant setup.
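The exact filtering chains installed by Neutron vary with the release, so the following sketch only illustrates the kind of netfilter exception that was added: an iptables rule, matched on the bridged port of the DPI appliance, that accepts packets sourced from the customer VM's subnet. The interface name and subnet are hypothetical.

    import subprocess

    DPI_TAP = "tap77f8a413-49"       # hypothetical tap port of the DPI appliance
    CUSTOMER_SUBNET = "10.0.3.0/24"  # hypothetical customer VM subnet

    # Bridged frames traverse the iptables FORWARD chain thanks to the
    # physdev match; the per-port antispoofing rules would drop packets
    # whose source does not match the port's own addresses, so an explicit
    # ACCEPT for the customer subnet is inserted ahead of them.
    subprocess.run([
        "iptables", "-I", "FORWARD", "1",
        "-m", "physdev", "--physdev-in", DPI_TAP,
        "-s", CUSTOMER_SUBNET, "-j", "ACCEPT",
    ], check=True)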

Figure 6: The OpenStack dashboard shows the tenants' virtual networks (slices). Each slice includes a VM connected to an internal network (InVMnet_i) and a second VM performing DPI and packet forwarding between InVMnet_i and DPInet_i. Connectivity with the public Internet is provided for all by the virtual router in the bottom-left corner.

4.3. Multitenant NFV Scenario with Shared Network Functions. We finally extend our analysis to a set of multitenant scenarios assuming different levels of shared VNFs, as illustrated in Figure 7. We start with a single VNF, that is, the virtual router connecting all tenants to the external network (Figure 7(a)). Then we progressively add a shared DPI (Figure 7(b)), a shared firewall/NAT function (Figure 7(c)), and a shared traffic shaper (Figure 7(d)). The rationale behind this last group of setups is to evaluate how NFV deployment on top of an OpenStack compute node performs under a realistic multitenant scenario, where traffic flows must be processed by a chain of multiple VNFs. The complexity of the virtual network path inside the compute node for the VNF chaining of Figure 7(d) is displayed in Figure 8. The peculiar nature of NFV traffic flows is clearly shown in the figure, where packets are being forwarded multiple times across br-int as they enter and exit the multiple VNFs running in the compute node.

5. Numerical Results

5.1. Benchmark Performance. Before presenting and discussing the performance of the study scenarios described above, it is important to set some benchmark as a reference for comparison. This was done by considering a back-to-back (B2B) connection between two physical hosts with the same hardware configuration used in the cluster of the cloud platform.

The former host acts as traffic generator, while the latter acts as traffic sink. The aim is to verify and assess the maximum throughput and sustainable packet rate of the hardware platform used for the experiments. Packet flows ranging from 10^3 to 10^5 packets per second (pps), for both 64- and 1500-byte IP packet sizes, were generated.

Figure 7: Multitenant NFV scenario with shared network functions, tested on the OpenStack platform: (a) single VNF (virtual router); (b) two VNFs' chaining (DPI and virtual router); (c) three VNFs' chaining (DPI, firewall/NAT, and virtual router); (d) four VNFs' chaining (DPI, firewall/NAT, traffic shaper, and virtual router).

For 1500-byte packets, the throughput saturates to about 970 Mbps at 80 Kpps. Given that the measurement does not consider the Ethernet overhead, this limit is clearly very close to 1 Gbps, which is the physical limit of the Ethernet interface. For 64-byte packets, the results are different, since the maximum measured throughput is about 150 Mbps. Therefore, the limiting factor is not the Ethernet bandwidth but the maximum sustainable packet processing rate of the compute node. These results are shown in Figure 9.
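These saturation figures can be cross-checked with simple arithmetic on the IP-level rates, as in the following sketch, which, like the measurements, ignores the Ethernet framing overhead.

    def ip_throughput_mbps(rate_pps, packet_bytes):
        # IP-level throughput, excluding Ethernet preamble, header, and gaps.
        return rate_pps * packet_bytes * 8 / 1e6

    # 80 Kpps of 1500-byte packets almost fill the Gigabit link:
    print(ip_throughput_mbps(80e3, 1500))  # 960.0 Mbps, near the ~970 Mbps knee

    # At the same packet rate, 64-byte packets carry roughly 23 times less
    # traffic, so the bottleneck shifts from bandwidth to packet processing:
    print(ip_throughput_mbps(80e3, 64))    # 40.96 Mbps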

This latter limitation, related to the processing capabilities of the hosts, is not very relevant to the scopes of this work. Indeed, it is always possible in a real operation environment to deploy more powerful and better dimensioned hardware. This was not possible in this set of experiments, where the cloud cluster was an existing research infrastructure which could not be modified at will. Nonetheless, the objective here is to understand the limitations that emerge as a consequence of the networking architecture resulting from the deployment of the VNFs in the cloud, and not of the specific hardware configuration. For these reasons, as well as for the sake of brevity, the numerical results presented in the following mostly focus on the case of 1500-byte packet length, which will stress the network more than the hosts in terms of performance.

5.2. Single Tenant Cloud Computing Scenario. The first series of results is related to the single tenant scenario described in Section 4.1. Figure 10 shows the comparison of OpenStack setups (1.1) and (1.2) with the B2B case. The figure shows that the different networking configurations play a crucial role in performance. Setup (1.1), with the two VMs colocated in the same compute node, clearly is more demanding, since the compute node has to process the workload of all the components shown in Figure 3, that is, packet generation and reception in two VMs and layer 2 switching in two Linux Bridges and two OVS bridges (as a matter of fact, the packets are both outgoing and incoming at the same time within the same physical machine). The performance starts deviating from the B2B case at around 20 Kpps, with a saturating effect starting at 30 Kpps. This is the maximum packet processing capability of the compute node, regardless of the physical networking capacity, which is not fully exploited in this particular scenario where the traffic flow does not leave the physical host. Setup (1.2) splits the workload over two physical machines, and the benefit is evident. The performance is almost ideal, with a very little penalty due to the virtualization overhead.

These very simple experiments lead to an important conclusion that motivates the more complex experiments that follow: the standard OpenStack virtual network implementation can show significant performance limitations. For this reason, the first objective was to investigate where the possible bottleneck is, by evaluating the performance of the virtual network components in isolation. This cannot be done with OpenStack in action; therefore, ad hoc virtual networking scenarios were implemented, deploying just parts of the typical OpenStack infrastructure. These are called Non-OpenStack scenarios in the following.

Figure 8: A view of the OpenStack compute node with the tenant VM and the VNFs installed, including the building blocks of the virtual network infrastructure. The red dashed line shows the path followed by the packets traversing the VNF chain displayed in Figure 7(d).

Figure 9: Throughput versus generated packet rate in the B2B setup for 64- and 1500-byte packets. Comparison with the ideal 1500-byte packet throughput.

Figure 10: Received versus generated packet rate in the OpenStack scenario, setups (1.1) and (1.2), with 1500-byte packets.

Setups (2.1) and (2.2) compare Linux Bridge, OVS, and B2B, as shown in Figure 11. The graphs show interesting and important results, which can be summarized as follows:

(i) The introduction of some virtual network component (thus introducing the processing load of the physical hosts in the equation) is always a cause of performance degradation, but with very different degrees of magnitude depending on the virtual network component.

(ii) OVS introduces a rather limited performance degradation, at very high packet rate, with a loss of some percent.

(iii) Linux Bridge introduces a significant performance degradation, starting well before the OVS case and leading to a loss in throughput as high as 50%.

Figure 11: Received versus generated packet rate in the Non-OpenStack scenario, setups (2.1) and (2.2), with 1500-byte packets.

The conclusion of these experiments is that the presence of additional Linux Bridges in the compute nodes is one of the main reasons for the OpenStack performance degradation. Results obtained from testing setup (2.3) are displayed in Figure 12, confirming that with OVS it is possible to reach performance comparable with the baseline.

Figure 12: Received versus generated packet rate in the Non-OpenStack scenario, setup (2.3), with 1500-byte packets.

Figure 13: Received versus generated packet rate for each tenant (T1, T2, T3, and T4), for different numbers of active tenants, with 1500-byte IP packet size.

5.3. Multitenant NFV Scenario with Dedicated Network Functions. The second series of experiments was performed with reference to the multitenant NFV scenario with dedicated network functions described in Section 4.2. The case study considers different numbers of tenants hosted in the same compute node, sending data to a destination outside the LAN, therefore beyond the virtual gateway. Figure 13 shows the packet rate actually received at the destination for each tenant, for different numbers of simultaneously active tenants, with 1500-byte IP packet size. In all cases, the tenants generate the same amount of traffic, resulting in as many overlapping curves as the number of active tenants. All curves grow linearly as long as the generated traffic is sustainable, and then they saturate. The saturation is caused by the physical bandwidth limit imposed by the Gigabit Ethernet interfaces involved in the data transfer. In fact, the curves become flat as soon as the packet rate reaches about 80 Kpps for 1 tenant, about 40 Kpps for 2 tenants, about 27 Kpps for 3 tenants, and about 20 Kpps for 4 tenants, that is, when the total packet rate is slightly more than 80 Kpps, corresponding to 1 Gbps.
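A quick sanity check of these saturation points follows, under the assumption that the active tenants share the 1-Gbps bottleneck equally:

    LINK_MBPS = 1000      # Gigabit Ethernet bottleneck
    PACKET_BYTES = 1500

    # Total IP-level packet rate that fills the link, then the equal share
    # per tenant; the values track the measured knees (80, 40, 27, 20 Kpps).
    total_kpps = LINK_MBPS * 1e6 / (PACKET_BYTES * 8) / 1e3
    for n_tenants in (1, 2, 3, 4):
        print(n_tenants, round(total_kpps / n_tenants, 1), "Kpps per tenant")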

Figure 14: Received versus generated packet rate for each tenant (T1, T2, T3, and T4), for different numbers of active tenants, with 64-byte IP packet size.

In this case, it is worth investigating what happens for small packets, therefore putting more pressure on the processing capabilities of the compute node. Figure 14 reports the 64-byte packet size case. As discussed previously, in this case the performance saturation is not caused by the physical bandwidth limit, but by the inability of the hardware platform to cope with the packet processing workload (in fact, the single compute node has to process the workload of all the components involved, including packet generation and DPI in the VMs of each tenant, as well as layer 2 packet processing and switching in three Linux Bridges per tenant and two OVS bridges). As could be easily expected from the results presented in Figure 9, the virtual network is not able to use the whole physical capacity. Even in the case of just one tenant, a total bit rate of about 77 Mbps, well below 1 Gbps, is measured. Moreover, this penalty increases with the number of tenants (i.e., with the complexity of the virtual system). With two tenants, the curve saturates at a total of approximately 150 Kpps (75 × 2); with three tenants, at a total of approximately 135 Kpps (45 × 3); and with four tenants, at a total of approximately 120 Kpps (30 × 4). This is to say that an increase of one unit in the number of tenants results in a decrease of about 10% in the usable overall network capacity and in a similar penalty per tenant.

Given the results of the previous section, it is likely that the Linux Bridges are responsible for most of this performance degradation. In Figure 15, a comparison is presented between the total throughput obtained under normal OpenStack operations and the corresponding total throughput measured in a custom configuration where the Linux Bridges attached to each VM are bypassed. To implement the latter scenario, the OpenStack virtual network configuration running in the compute node was modified by connecting each VM's tap interface directly to the OVS integration bridge. The curves show that the presence of Linux Bridges in normal OpenStack mode is indeed causing performance degradation, especially when the workload is high (i.e., with 4 tenants). It is interesting to note also that the penalty related to the number of tenants is mitigated by the bypass, but not fully solved.
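For reproducibility, the bypass can be sketched as the following rewiring, again with a hypothetical port ID; note that detaching the tap interface from its Linux Bridge also disables the netfilter-based antispoofing normally enforced there.

    import subprocess

    def sh(*cmd):
        subprocess.run(cmd, check=True)

    vm_id = "4345870b-63"  # hypothetical port ID (naming as in Figure 3)

    # Detach the VM tap port from its dedicated Linux Bridge...
    sh("ip", "link", "set", f"tap{vm_id}", "nomaster")
    # ...remove the veth pair and the bridge that are no longer in the path...
    sh("ovs-vsctl", "del-port", "br-int", f"qvo{vm_id}")
    sh("ip", "link", "del", f"qvb{vm_id}")  # deleting one end removes the pair
    sh("ip", "link", "del", f"qbr{vm_id}")
    # ...and plug the tap interface straight into the OVS integration bridge.
    sh("ovs-vsctl", "add-port", "br-int", f"tap{vm_id}")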

Figure 15: Total throughput measured versus total packet rate generated by 2 to 4 tenants, for 64-byte packet size. Comparison between normal OpenStack mode and Linux Bridge bypass with 3 and 4 tenants.

Figure 16: Received versus generated packet rate for one tenant (T1) when four tenants are active, with 1500-byte IP packet size and different levels of VNF chaining as per Figure 7. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

Figure 17: Received versus generated packet rate for one tenant (T1) when four tenants are active, with 64-byte IP packet size and different levels of VNF chaining as per Figure 7. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

5.4. Multitenant NFV Scenario with Shared Network Functions. The third series of experiments was performed with reference to the multitenant NFV scenario with shared network functions described in Section 4.3. In each experiment, four tenants equally generate increasing amounts of traffic, ranging from 1 to 100 Kpps. Figures 16 and 17 show the packet rate actually received at the destination from tenant T1, as a function of the packet rate generated by T1, for different levels of VNF chaining, with 1500- and 64-byte IP packet size, respectively. The measurements demonstrate that, for the 1500-byte case, adding a single shared VNF (even one that executes heavy packet processing, such as the DPI) does not significantly impact the forwarding performance of the OpenStack compute node for a packet rate below 50 Kpps (note that the physical capacity is saturated by the flows simultaneously generated by the four tenants at around 20 Kpps, similarly to what happens in the dedicated VNF case of Figure 13). Then the throughput slowly degrades. In contrast, when 64-byte packets are generated, even a single VNF can cause heavy performance losses above 25 Kpps, when the packet rate reaches the sustainability limit of the forwarding capacity of our compute node. Independently of the packet size, adding another VNF with heavy packet processing (the firewall/NAT is configured with 40000 matching rules) causes the performance to rapidly degrade. This is confirmed when a fourth VNF is added to the chain, although for the 1500-byte case the measured packet rate is the one that saturates the maximum bandwidth made available by the traffic shaper. Very similar performance, which we do not show here, was measured also for the other three tenants.

To further investigate the effect of VNF chaining, we considered the case when traffic generated by tenant T1 is not subject to VNF chaining (as in Figure 7(a)), whereas flows originated from T2, T3, and T4 are processed by four VNFs (as in Figure 7(d)). The results presented in Figures 18 and 19 demonstrate that, owing to the traffic shaping function applied to the other tenants, the throughput of T1 can reach values not very far from the case when it is the only active tenant, especially for packet rates below 35 Kpps. Therefore, a smart choice of the VNF chaining and a careful planning of the cloud platform resources could improve the performance of a given class of priority customers. In the same situation we measured the TCP throughput achievable by the four tenants. As shown in Figure 20, we can reach the same conclusions as in the UDP case.


Figure 18: Received throughput versus generated packet rate for each tenant (T1, T2, T3, and T4) when T1 does not traverse the VNF chain of Figure 7(d), with 1500-byte IP packet size. Comparison with the single tenant case. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

Figure 19: Received throughput versus generated packet rate for each tenant (T1, T2, T3, and T4) when T1 does not traverse the VNF chain of Figure 7(d), with 64-byte IP packet size. Comparison with the single tenant case. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

6. Conclusion

Network Function Virtualization will completely reshape the approach of telco operators to provide existing as well as novel network services, taking advantage of the increased flexibility and reduced deployment costs of the cloud computing paradigm. In this work, the problem of evaluating complexity and performance, in terms of sustainable packet rate, of virtual networking in cloud computing infrastructures dedicated to NFV deployment was addressed. An OpenStack-based cloud platform was considered and deeply analyzed to fully understand the architecture of its virtual network infrastructure. To this end, an ad hoc visual tool was also developed that graphically plots the different functional blocks (and related interconnections) put in place by Neutron, the OpenStack networking service. Some examples were provided in the paper.



Figure 20: Received TCP throughput for each tenant (T1, T2, T3, and T4) when T1 does not traverse the VNF chain of Figure 7(d). Comparison with the single tenant case. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.


The analysis brought the focus of the performance investigation on the two basic software switching elements natively adopted by OpenStack, namely, Linux Bridge and Open vSwitch. Their performance was first analyzed in a single tenant cloud computing scenario, by running experiments on a standard OpenStack setup as well as in ad hoc stand-alone configurations built with the specific purpose of observing them in isolation. The results prove that the Linux Bridge is the critical bottleneck of the architecture, while Open vSwitch shows an almost optimal behavior.

The analysis was then extended to more complex scenarios, assuming a data center hosting multiple tenants deploying NFV environments. The case studies considered first a simple dedicated deep packet inspection function, followed by conventional address translation and routing, and then a more realistic virtual network function chaining shared among a set of customers with increased levels of complexity. Results about sustainable packet rate and throughput performance of the virtual network infrastructure were presented and discussed.

The main outcome of this work is that an open-source cloud computing platform such as OpenStack can be effectively adopted to deploy NFV in network edge data centers replacing legacy telco central offices. However, this solution poses some limitations to the network performance, which are not simply related to the hosting hardware maximum capacity but also to the virtual network architecture implemented by OpenStack. Nevertheless, our study demonstrates that some of these limitations can be mitigated with a careful redesign of the virtual network infrastructure and an optimal planning of the virtual network functions. In any case, such limitations must be carefully taken into account for any engineering activity in the virtual networking arena.

Obviously, scaling up the system and distributing the virtual network functions among several compute nodes will definitely improve the overall performance. However, in this case the role of the physical network infrastructure becomes critical, and an accurate analysis is required in order to isolate the contributions of virtual and physical components. We plan to extend our study in this direction in our future work, after properly upgrading our experimental test-bed.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work was partially funded by EIT ICT Labs, Action Line on Future Networking Solutions, Activity no. 15270/2015 "SDN at the Edges". The authors would like to thank Mr. Giuliano Santandrea for his contributions to the experimental setup.


Research Article
Server Resource Dimensioning and Routing of Service Function Chain in NFV Network Architectures

V. Eramo,1 A. Tosti,2 and E. Miucci1

1DIET, "Sapienza" University of Rome, Via Eudossiana 18, 00184 Rome, Italy
2Telecom Italia, Via di Val Cannuta 250, 00166 Roma, Italy

Correspondence should be addressed to E. Miucci; 1kronos1@gmail.com

Received 29 September 2015; Accepted 18 February 2016

Academic Editor: Vinod Sharma

Copyright © 2016 V. Eramo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The Network Function Virtualization (NFV) technology aims at virtualizing the network service with the execution of the single service components in Virtual Machines activated on Commercial-off-the-shelf (COTS) servers. Any service is represented by a Service Function Chain (SFC), that is, a set of VNFs to be executed according to a given order. The running of VNFs needs the instantiation of VNF instances (VNFIs) that in general are software components executed on Virtual Machines. In this paper we cope with the routing and resource dimensioning problem in NFV architectures. We formulate the optimization problem and, due to its NP-hard complexity, heuristics are proposed for both cases of offline and online traffic demand. We show how the heuristics work correctly by guaranteeing a uniform occupancy of the server processing capacity and the network link bandwidth. A consolidation algorithm for the power consumption minimization is also proposed. The application of the consolidation algorithm allows for a high power consumption saving that, however, is to be paid with an increase in SFC blocking probability.

1. Introduction

Today's networks are overly complex, partly due to an increasing variety of proprietary, fixed-function appliances that are unable to deliver the agility and economics needed to address constantly changing market requirements [1]. This is because network elements have traditionally been optimized for high packet throughput at the expense of flexibility, thus hampering the deployment of new services [2]. Network Function Virtualization (NFV) can provide the infrastructure flexibility and agility needed to successfully compete in today's evolving communications landscape [3]. NFV implements network functions in software running on a pool of shared commodity servers instead of using dedicated proprietary hardware. This virtualized approach decouples the network hardware from the network function and results in increased infrastructure flexibility and reduced hardware costs. Because the infrastructure is simplified and streamlined, new and expanded services can be created quickly and with less expense. Implementation of the paradigm has also been proposed [4] and the performance has been investigated [5]. To support the NFV technology, both ETSI [6, 7] and

IETF [8, 9] are defining novel network architectures able to allocate resources for Virtualized Network Functions (VNFs) as well as manage and orchestrate NFV to support services. In particular, the service is represented by a Service Function Chain (SFC) [8], that is, a set of VNFs that have to be executed according to a given order. Any VNF is run on a VNF instance (VNFI) implemented with one Virtual Machine (VM) whose resources (Vcores, RAM memory, etc.) are allocated to it [10]. Some solutions have been proposed in the literature to solve the problem of choosing the servers where to instantiate VNFs and to determine the network paths interconnecting the VNFs [1]. A formulation of the optimization problem is illustrated in [11]. Three greedy algorithms and a tabu search-based heuristic are proposed in [12]. The extension of the problem in the case in which virtual routers are also considered is proposed in [13].

The innovative contribution of our paper is that, differently from [12], we follow an approach that allows for both the resource dimensioning and the SFC routing. We formulate the optimization problem whose objective is the minimization of the number of dropped SFC requests with the constraint that the SFCs are routed so as to respect both



the server processing capacity and network link bandwidth. With the problem being NP-hard, we introduce a heuristic that allows for (i) the dimensioning of the Virtual Machines in terms of number of Vcores assigned and (ii) the SFC routing through the VNF instances implemented by the Virtual Machines. The heuristic performs simultaneously the dimensioning and routing operations. Furthermore, we propose a consolidation algorithm based on Virtual Machine migrations and able to achieve power consumption savings. The proposed algorithms are evaluated in scenarios characterized by offline and online traffic demands.

The paper is organized as follows. The related work is discussed in Section 2. Section 3 is devoted to illustrating the optimization problem and the proposed heuristic for the offline traffic demand case. In Section 4 we describe an SFC planner in which the proposed heuristic is applied in the case of online traffic demand. The planner also implements a consolidation algorithm whose application allows for power consumption savings. Some numerical results are shown in Section 5 to prove the effectiveness of the proposed heuristics. Finally, the main conclusions and future research items are mentioned in Section 6.

2. Related Work

The Internet Engineering Task Force (IETF) has formed the SFC Working Group [8, 9] to define Service Function Chaining related problems and to standardize the architecture and protocols. A Service Function Chain (SFC) is defined as a set of abstract service functions [15] and ordering constraints that must be applied to packets selected as a result of the classification. When virtual service functions are considered, the SFC is referred to as Virtual Network Function Forwarding Graph (VNFFG) within the ETSI [6]. To support the SFCs, Virtual Network Function instances (VNFIs) are activated and executed in COTS servers. To achieve the economics of scale expected from NFV, network link and server resources should be used efficiently. For this reason, efficient algorithms have to be introduced to determine where to instantiate the VNFIs and to route the SFCs by choosing the network paths and the VNFIs involved. The algorithms have to take into account the limited resources of the network links and the servers and pursue objectives of load balancing, energy saving, recovery from failure, and so on [1]. The task of placing SFCs is closely related to virtual network embedding [16] and virtual data network embedding [17] and may therefore be formulated as an optimization problem with a particular objective. This approach has been followed by [11, 18–21]. For instance, Moens and Turck [11] formulate the SFC placement problem as a linear optimization problem that has the objective of minimizing the number of active servers. Other objectives are pursued in [19] (latency, remaining data rate, number of used network nodes, etc.) and the SFC placing is formulated as a mixed integer quadratically constrained program.

It has been proved that the SFC placing problem is NP-hard. For this reason, efficient heuristics have been proposed and evaluated [10, 13, 22, 23]. For example, Xia et al. [22]

formulate the placement and chaining problem as binary integer programming and propose a greedy heuristic in which the SFCs are first sorted according to their resource demands and the SFCs with the highest resource demands are given priority for placement and routing.

In order to achieve energy consumption saving, the NFV architecture should allow for migrations of VNFIs, that is, the migration of the Virtual Machine implementing the VNFI. Though VNFI migrations allow for energy consumption saving, they may impact the QoS performance received by the users related to the migrated VNFs. A model has been proposed in [24] to derive some performance indicators, such as the whole service downtime and the total migration time, so as to make function migration decisions.

The contribution of this paper is twofold: (i) to propose and to evaluate the performance of an algorithm that performs simultaneously resource dimensioning and SFC routing and (ii) to investigate the advantages, from the point of view of the power consumption saving, that the application of server consolidation techniques allows us to achieve.

3. Offline Algorithms for SFC Routing in NFV Network Architectures

We consider the case in which SFC requests are known in advance. We formulate the optimization problem whose objective is the minimization of the number of dropped SFC requests with the constraint that the SFCs are routed so as to respect both the server processing capacity and network link bandwidth. With the problem being NP-hard, we introduce a heuristic that allows for (i) the dimensioning of the Virtual Machines in terms of the number of Vcores assigned and (ii) the SFC routing through the VNF instances implemented by the Virtual Machines. The heuristic performs simultaneously the dimensioning and routing operations.

The section is organized as follows. The network and traffic model is introduced in Section 3.1. Sections 3.2 and 3.3 are devoted to illustrating the optimization problem and the proposed heuristic, respectively.

3.1. Network and Traffic Model. Next we introduce the main terminology used to represent the physical network, VNFs, and the SFC traffic requests [25]. We represent the physical network PN as a directed graph $G^{PN} = (V^{PN}, E^{PN})$, where $V^{PN}$ and $E^{PN}$ are the sets of physical nodes and links, respectively. The set $V^{PN}$ of nodes is given by the union of the three node sets $V^{PN}_{A}$, $V^{PN}_{R}$, and $V^{PN}_{S}$, which are the sets of access, switching, and server nodes, respectively. The server nodes and links are characterized by the following:

(i) $N^{PN}_{core}(w)$: processing capacity of the server node $w \in V^{PN}_{S}$, in terms of the number of cores available;

(ii) $C^{PN}(e)$: bandwidth of the physical link $e \in E^{PN}$.

We assume that $F$ types of VNFs can be provisioned, such as firewall, IDS, proxy, load balancers, and so on. We denote by $\mathcal{F} = \{f_1, f_2, \ldots, f_F\}$ the set of VNFs, with $f_i$ being the



Figure 1: An example of graph representing an SFC request for a flow characterized by a bandwidth of 1 Mbps and a packet length of 1500 bytes.

$i$th VNF type. The packet processing time of the VNF $f_i$ is denoted by $t^{proc}_{i}$ ($i = 1, \ldots, F$). We also assume that the network operator is receiving $T$ Service Function Chain (SFC) requests, known in advance. The $i$th SFC request is characterized by the graph $G^{SFC}_{i} = (V^{SFC}_{i}, E^{SFC}_{i})$, where $V^{SFC}_{i}$ represents the set of access and VNF nodes and $E^{SFC}_{i}$ ($i = 1, \ldots, T$) denotes the links between them. In particular, the set $V^{SFC}_{i}$ is given by the union of $V^{SFC}_{i,A}$ and $V^{SFC}_{i,F}$, denoting the sets of access nodes and VNFs, respectively. The graph is characterized by the following parameters:

(i) $\alpha_{v,w}$: assuming the value 1 if the access node $v \in \bigcup_{i=1}^{T} V^{SFC}_{i,A}$ characterizes an SFC request starting/terminating from/to the physical access node $w \in V^{PN}_{A}$; otherwise its value is 0;

(ii) $\beta_{v,k}$: assuming the value 1 if the VNF node $v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}$ needs the application of the VNF $f_k$ ($k \in [1, F]$); otherwise its value is 0;

(iii) $B^{SFC}(v)$: the processing capacity requested by the VNF node $v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}$; the parameter value depends on both the packet length and the bandwidth of the packet flow incoming to the VNF node;

(iv) $C^{SFC}(d)$: bandwidth requested by the link $d \in \bigcup_{i=1}^{T} E^{SFC}_{i}$.

An example of SFC request is represented in Figure 1, where the traffic emitted by the ingress access node $u$ has to be handled by the VNF nodes $v_1$ and $v_2$ and is terminating at the egress access node $t$. The access and VNF nodes are interconnected by the links $e_1$, $e_2$, and $e_3$. If the flow bandwidth is 1 Mbps and the packet length is 1500 bytes, we obtain the processing capacities $B^{SFC}(v_1) = B^{SFC}(v_2) = 83.33$, while the bandwidths $C^{SFC}(e_1)$, $C^{SFC}(e_2)$, and $C^{SFC}(e_3)$ of all links equal 1.
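As a quick check of these figures, the requested processing capacity is simply the packet rate of the incoming flow (bandwidth divided by packet size in bits); a minimal sketch:

# Minimal sketch: processing capacity requested by a VNF node, computed as
# the packet rate (packets/s) of the incoming flow.
def processing_capacity(bandwidth_bps, packet_len_bytes):
    return bandwidth_bps / (packet_len_bytes * 8)

# 1 Mbps flow with 1500-byte packets -> 83.33 packets/s, as in Figure 1.
print(round(processing_capacity(1e6, 1500), 2))  # 83.33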

3.2. Optimization Problem for the SFC Routing and VNF Instance Dimensioning. The objective of the SFC Routing and VNF Instance Dimensioning (SRVID) problem is to maximize the number of accepted SFC requests. The output of the problem is characterized by the following: (i) the servers in which the VNF nodes are executed and (ii) the network paths in which the virtual links of the SFC are routed. This arrangement has to be accomplished without violating both the server processing and the physical link capacities. We assume that all of the servers create a VNF instance for each type of VNF, which will be shared by the SFCs using that server and requesting that type of VNF. For this reason each server will activate $F$ VNF instances, one for each type, and another output of the problem is to determine the number of Vcores to be assigned to each VNF instance.

We assume that one virtual link of the SFC graphs can be routed through a single physical network path. We introduce the following notations:

(i) $P$: set of paths in $G^{PN}$;

(ii) $\delta_{e,p}$: binary function assuming the value 1 or 0 if the physical network link $e$ belongs or does not belong to the path $p \in P$, respectively;

(iii) $a^{PN}(p)$ and $b^{PN}(p)$: origin and destination nodes of the path $p \in P$;

(iv) $a^{SFC}(d)$ and $b^{SFC}(d)$: origin and destination nodes of the virtual link $d \in \bigcup_{i=1}^{T} E^{SFC}_{i}$.

Next we formulate the optimal SRVID problem, characterized by the following optimization variables:

(i) $x_h$: binary variable assuming the value 1 if the $h$th SFC request is accepted; otherwise its value is zero;

(ii) $y_{w,k}$: integer variable characterizing the number of Vcores allocated to the VNF instance of type $k$ in the server $w \in V^{PN}_{S}$;

(iii) $z^{k}_{v,w}$: binary variable assuming the value 1 if the VNF node $v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}$ is served by the VNF instance of type $k$ in the server $w \in V^{PN}_{S}$;

(iv) $u_{d,p}$: binary variable assuming the value 1 if the virtual link $d \in \bigcup_{i=1}^{T} E^{SFC}_{i}$ is embedded in the physical network path $p \in P$; otherwise its value is zero.

Next we report the constraints for the optimization variables:

$\sum_{k=1}^{F} y_{w,k} \le N^{PN}_{core}(w), \quad w \in V^{PN}_{S}$  (1)

$\sum_{k=1}^{F} \sum_{w \in V^{PN}_{S}} z^{k}_{v,w} \le 1, \quad v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}$  (2)

$z^{k}_{v,w} \le \beta_{v,k}, \quad k \in [1, F],\ v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F},\ w \in V^{PN}_{S}$  (3)

$\sum_{v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}} z^{k}_{v,w} B^{SFC}(v) t^{proc}_{k} \le y_{w,k}, \quad k \in [1, F],\ w \in V^{PN}_{S}$  (4)

$u_{d,p} \le z^{k}_{a^{SFC}(d),\, a^{PN}(p)}, \quad a^{SFC}(d) \in \bigcup_{i=1}^{T} V^{SFC}_{i,F},\ k \in [1, F],\ p \in P$  (5)

$u_{d,p} \le z^{k}_{b^{SFC}(d),\, b^{PN}(p)}, \quad b^{SFC}(d) \in \bigcup_{i=1}^{T} V^{SFC}_{i,F},\ k \in [1, F],\ p \in P$  (6)

$u_{d,p} \le \alpha_{a^{SFC}(d),\, a^{PN}(p)}, \quad a^{SFC}(d) \in \bigcup_{i=1}^{T} V^{SFC}_{i,A},\ p \in P$  (7)

$u_{d,p} \le \alpha_{b^{SFC}(d),\, b^{PN}(p)}, \quad b^{SFC}(d) \in \bigcup_{i=1}^{T} V^{SFC}_{i,A},\ p \in P$  (8)

$\sum_{p \in P} u_{d,p} \le 1, \quad d \in \bigcup_{i=1}^{T} E^{SFC}_{i}$  (9)

$\sum_{d \in \bigcup_{i=1}^{T} E^{SFC}_{i}} C^{SFC}(d) \sum_{p \in P} \delta_{e,p} u_{d,p} \le C^{PN}(e), \quad e \in E^{PN}$  (10)

$x_{h} \le \sum_{k=1}^{F} \sum_{w \in V^{PN}_{S}} z^{k}_{v,w}, \quad h \in [1, T],\ v \in V^{SFC}_{h,F}$  (11)

$x_{h} \le \sum_{p \in P} u_{d,p}, \quad h \in [1, T],\ d \in E^{SFC}_{h}$  (12)

Constraint (1) establishes that at most a number of Vcores equal to the available ones is used for each server node $w \in V^{PN}_{S}$. Constraint (2) expresses the condition that a VNF can be served by only one VNF instance. To guarantee that a node $v \in V^{SFC}_{h,F}$ needing the application of a VNF type is mapped to a correct VNF instance, we introduce constraint (3). Constraint (4) limits the number of VNF nodes assigned to one VNF instance by taking into account both the number of Vcores assigned to the VNF instance and the required VNF node processing capacities. Constraints (5)–(8) establish that, when the virtual link $d$ is supported by the physical network path $p$, then $a^{PN}(p)$ and $b^{PN}(p)$ must be the physical network nodes that the nodes $a^{SFC}(d)$ and $b^{SFC}(d)$ of the virtual graph are assigned to. The choice of mapping of any virtual link on a single physical network path is represented by constraint (9). Constraint (10) avoids any overloading on any physical network link. Finally, constraints (11) and (12) establish that an SFC request can be accepted only when the nodes and the links of the SFC graph are assigned to VNF instances and physical network paths.

The problem objective is to maximize the number of SFCs accepted, given by

$\max \sum_{h=1}^{T} x_{h}$.  (13)

The introduced optimization problem is intractable because it requires solving an NP-hard bin packing problem [26]. Due to its high complexity, it is not possible to solve the problem directly in a timely manner, given the large number of servers and network nodes. For this reason we propose an efficient heuristic in Section 3.3.
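Although solving the SRVID problem exactly is impractical at scale, individual constraints are inexpensive to verify for a candidate solution. The following sketch, using a simplified dictionary-based encoding of our own choosing, checks constraints (1) and (4) for a single server:

# Sketch: verify constraints (1) and (4) of SRVID for one server of a
# candidate allocation. Inputs are simplified stand-ins: z[k][v] in {0,1},
# B[v] in packets/s, t_proc[k] in seconds, y[k] integer Vcores, n_core cores.
def server_feasible(z, B, t_proc, y, n_core):
    if sum(y.values()) > n_core:                      # constraint (1)
        return False
    for k, assigned in z.items():                     # constraint (4)
        load = sum(B[v] * t_proc[k] for v, on in assigned.items() if on)
        if load > y[k]:
            return False
    return True

z = {"firewall": {"v1": 1}, "vpn": {"v2": 1}}
B = {"v1": 83.33, "v2": 83.33}
t_proc = {"firewall": 7.08e-6, "vpn": 1.64e-6}
y = {"firewall": 1, "vpn": 1}
print(server_feasible(z, B, t_proc, y, n_core=48))    # True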

3.3. Maximizing the Accepted SFC Requests Number (MASRN) Heuristic. The Maximizing the Accepted SFC Requests Number (MASRN) heuristic is based on the embedding algorithm proposed in [14] and has the objective of maximizing the number of accepted SFC requests. The MASRN algorithm tries routing the SFCs one by one by using the least loaded link and server resources so as to balance their use. Algorithm 1 illustrates the operation mode of the MASRN algorithm. The routing of a target SFC, the $k_{tar}$th, is illustrated. The main inputs of the algorithm are (i) the set $T_{pre}$ containing the indexes of the SFC requests routed successfully before the $k_{tar}$th SFC request; (ii) the $k_{tar}$th SFC request graph $G^{SFC}_{k_{tar}} = (V^{SFC}_{k_{tar}}, E^{SFC}_{k_{tar}})$; (iii) the values of the parameters $\alpha_{v,w}$ for the $k_{tar}$th SFC request; (iv) the values of the variables $z^{k}_{v,w}$ and $u_{d,p}$ for the SFC requests successfully routed before the $k_{tar}$th SFC request, as defined in Section 3.2.

The operation mode of the heuristic is based on the following variables:

(i) $A^{k}(w)$: the load of the cores allocated for the VNF instance of type $k \in [1, F]$ in the server $w \in V^{PN}_{S}$; the value of $A^{k}(w)$ is initialized by taking into account the core resource amount used by the SFCs successfully routed before the $k_{tar}$th SFC request; hence its initial value is

$A^{k}(w) = \sum_{v \in \bigcup_{i \in T_{pre}} V^{SFC}_{i,F}} z^{k}_{v,w} B^{SFC}(v) t^{proc}_{k}$.  (14)

(ii) $S_{N}(w)$: defined as the stress of the node $w \in V^{PN}_{S}$ [14] and characterizing the server load; its value is initialized to

$S_{N}(w) = \sum_{k=1}^{F} A^{k}(w)$.  (15)

(iii) $S_{L}(e)$: defined as the stress of the physical network link $e \in E^{PN}$ [14] and characterizing the link load; its value is initialized to

$S_{L}(e) = \sum_{d \in \bigcup_{i \in T_{pre}} E^{SFC}_{i}} C^{SFC}(d) \sum_{p \in P} \delta_{e,p} u_{d,p}$.  (16)


(1) Input: Physical Network Graph $G^{PN} = (V^{PN}, E^{PN})$; $k_{tar}$th SFC request; $k_{tar}$th SFC request graph $G^{SFC}_{k_{tar}} = (V^{SFC}_{k_{tar}}, E^{SFC}_{k_{tar}})$; $T_{pre}$; $\alpha_{v,w}$, $v \in V^{SFC}_{k_{tar},A}$, $w \in V^{PN}_{A}$; $z^{k}_{v,w}$, $k \in [1, F]$, $v \in \bigcup_{i \in T_{pre}} V^{SFC}_{i,F}$, $w \in V^{PN}_{S}$; $u_{d,p}$, $d \in \bigcup_{i \in T_{pre}} E^{SFC}_{i}$, $p \in P$
(2) Variables: $A^{k}(w)$, $k \in [1, F]$, $w \in V^{PN}_{S}$; $S_{N}(w)$, $w \in V^{PN}_{S}$; $S_{L}(e)$, $e \in E^{PN}$; $y_{w,k}$, $k \in [1, F]$, $w \in V^{PN}_{S}$
/* Evaluation of the candidate mapping of $G^{SFC}_{k_{tar}} = (V^{SFC}_{k_{tar}}, E^{SFC}_{k_{tar}})$ in $G^{PN} = (V^{PN}, E^{PN})$ */
(3) assign the nodes $v \in V^{SFC}_{k_{tar},A}$ to the physical network nodes $w \in V^{PN}_{A}$ according to the values of the parameters $\alpha_{v,w}$
(4) select the nodes $w \in V^{PN}_{S}$ and the physical network paths $p \in P$ to be assigned to the server nodes $v \in V^{SFC}_{k_{tar},F}$ and virtual links $d \in E^{SFC}_{k_{tar}}$ by applying the algorithm proposed in [14], based on the values of the node and link stresses $S_{N}(w)$ and $S_{L}(e)$; determine the variable values $z^{k}_{v,w}$, $k \in [1, F]$, $v \in V^{SFC}_{k_{tar},F}$, $w \in V^{PN}_{S}$, and $u_{d,p}$, $d \in E^{SFC}_{k_{tar}}$, $p \in P$
/* Server Resource Availability Check Phase */
(5) for $v \in V^{SFC}_{k_{tar},F}$, $w \in V^{PN}_{S}$, $k \in [1, F]$ do
(6)  if $\sum_{s=1, s \ne k}^{F} y_{w,s} + \lceil A^{k}(w) + z^{k}_{v,w} B^{SFC}(v) t^{proc}_{k} \rceil \le N^{PN}_{core}(w)$ then
(7)   $A^{k}(w) = A^{k}(w) + z^{k}_{v,w} B^{SFC}(v) t^{proc}_{k}$
(8)   $S_{N}(w) = S_{N}(w) + z^{k}_{v,w} B^{SFC}(v) t^{proc}_{k}$
(9)   $y_{w,k} = \lceil A^{k}(w) \rceil$
(10) else
(11)  REJECT the $k_{tar}$th SFC request
(12) end if
(13) end for
/* Link Resource Availability Check Phase */
(14) for $e \in E^{PN}$ do
(15)  if $S_{L}(e) + \sum_{d \in E^{SFC}_{k_{tar}}} C^{SFC}(d) \sum_{p \in P} \delta_{e,p} u_{d,p} \le C^{PN}(e)$ then
(16)   $S_{L}(e) = S_{L}(e) + \sum_{d \in E^{SFC}_{k_{tar}}} C^{SFC}(d) \sum_{p \in P} \delta_{e,p} u_{d,p}$
(17)  else
(18)   REJECT the $k_{tar}$th SFC request
(19)  end if
(20) end for
(21) Output: $z^{k}_{v,w}$, $k \in [1, F]$, $v \in V^{SFC}_{k_{tar},F}$, $w \in V^{PN}_{S}$; $u_{d,p}$, $d \in E^{SFC}_{k_{tar}}$, $p \in P$

Algorithm 1: MASRN algorithm.

The candidate physical network links and nodes in which to embed the target SFC are evaluated. First of all, the access nodes are evaluated (line 3) according to the values of the parameters $\alpha_{v,w}$, $v \in V^{SFC}_{k_{tar},A}$, $w \in V^{PN}_{A}$. Next, the MASRN algorithm selects a cluster of server nodes (line 4) that are not lightly loaded but also likely to result in low substrate link stresses when they are connected. Details on the procedure are reported in [14].

In the next phases, the resource availability in the selected cluster is checked. The resource availability in the server nodes and physical network links is verified in the Server (lines 5–13) and Link (lines 14–20) Resource Availability Check Phases, respectively. If resources are not available in either links or nodes, the target SFC request is rejected. In particular, in the Server Resource Availability Check Phase the algorithm verifies whether the allocation of new Vcores is needed and, in this case, their availability is checked (line 6). The variable $y_{w,k}$ is also updated in this phase (line 9).

Finally, the outputs (line 21) of the MASRN algorithm are the selected values for the variables $z^{k}_{v,w}$, $k \in [1, F]$, $v \in V^{SFC}_{k_{tar},F}$, $w \in V^{PN}_{S}$, and $u_{d,p}$, $d \in E^{SFC}_{k_{tar}}$, $p \in P$.
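Lines (5)–(13) of Algorithm 1 can be read as a ceiling-based Vcore check per server; the sketch below mirrors the Server Resource Availability Check Phase under a simplified data layout of our own choosing (the Vcore count is taken as the ceiling of the updated load):

# Sketch of the Server Resource Availability Check Phase of Algorithm 1.
# A[w][k]: current core load of VNF instance k on server w;
# y[w][k]: Vcores currently allocated; n_core[w]: server core capacity.
import math

def server_check(A, y, n_core, w, k, B_v, t_proc_k):
    delta = B_v * t_proc_k                        # load added by the new VNF node
    others = sum(c for s, c in y[w].items() if s != k)
    if others + math.ceil(A[w][k] + delta) <= n_core[w]:   # line (6)
        A[w][k] += delta                          # line (7)
        y[w][k] = math.ceil(A[w][k])              # line (9): Vcores after update
        return True
    return False                                  # lines (10)-(11): REJECT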

The computational complexity of the MASRN algorithm depends on the procedure for the evaluation of the potential nodes [14]. Let $N_s$ be the total number of servers. The complexity of the phase in which the potential nodes are evaluated can be characterized according to the following remarks: (i) as many servers as the number of the VNFs of the SFC are determined and (ii) the $h$th server is selected by evaluating the $(N_s - h)$ shortest paths and by performing a minimum operation on a list with $(N_s - h)$ elements. According to these remarks, the computational complexity is given by $O(V(V + N_s)N_l \log(N_s + N_n))$, where $V$ is the maximum number of VNFs in an SFC and $N_l$ and $N_n$ are the numbers of links and nodes of the network, respectively.


Figure 2: SFC planner architecture.

4. Online Algorithms for SFC Routing in NFV Network Architectures

In the online approach the SFC requests arrive at the system over time, and each of them is characterized by a certain duration. In order to reduce the complexity of the online SFC resource allocation algorithm, we designed an SFC planner whose general architecture is described in Section 4.1. The operation mode of the SFC planner is based on some algorithms that we describe in Section 4.2.

4.1. SFC Planner Architecture. The architecture of the SFC planner is shown in Figure 2. It could make up the main component of an orchestrator in the NFV architecture [6]. It is composed of three components: the SFC Scheduler, the Resource Monitor, and the Consolidation Module.

Once the SFC Scheduler receives an SFC request, it executes an algorithm to verify if resources are available and to decide which resources to use.

The physical resources, as well as the Virtual Machines running on the servers, are monitored by the Resource Monitor. If a failure of any physical/virtual node or link occurs, it notifies the event to the SFC Scheduler.

The Consolidation Module allows for the server resource consolidation in low traffic periods. When the consolidation technique is applied, VMs are migrated to as few servers as possible, the remaining ones are turned off, and power consumption saving can be achieved.

4.2. SFC Scheduling and Consolidation Algorithms. In this section we describe the algorithms executed by the SFC Scheduler and the Consolidation Module.

Whenever a new SFC request arrives at the SFC planner, the SFC scheduling algorithm is executed in order to find the best embedding in the system while maintaining a uniform usage of the network resources. The main steps of this procedure are reported in the flow chart in Figure 3 and are based on the MASRN heuristic described in Section 3.3. The algorithm starts selecting the set of servers on which the VNFs requested by the SFC will be executed. In the second step, the physical paths on which the virtual links will be mapped are selected. If there are not enough available resources in the first or in the second step, the request is dropped.


Figure 3: Flow chart of the SFC scheduling algorithm.


The flow chart of the consolidation algorithm is reported in Figure 4. Each server $s \in V^{PN}_{S}$ is characterized by a parameter $\eta_s$, defined as the ratio of the total amount of incoming/outgoing traffic $\Lambda_s$ that it is handling to the power $P_s$ it is consuming, that is, $\eta_s = \Lambda_s / P_s$. The power consumption model assumes a constant power contribution and a variable one that linearly increases versus the handled traffic. The server power consumption is expressed by

$P_s = P_{idle} + (P_{max} - P_{idle}) \dfrac{L_s}{C_s}, \quad \forall s \in V^{PN}_{S}$,  (17)

where $P_{idle}$ is the power consumed by the server when no traffic is handled, $P_{max}$ is the maximum server power consumption, $L_s$ is the total load handled by the server $s$, and $C_s$ is its maximum capacity. The maximum power that the server can consume is expressed as $P_{max} = P_{idle}/a$, where $a$ is a parameter characterizing how much the power consumption depends on the handled traffic. The power consumption is all the more rate adaptive as $a$ is lower. For $a = 1$ the rate adaptive power consumption is absent and the consumed power is constant.
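Under the linear model (17), the efficiency metric $\eta_s$ used by the consolidation algorithm can be computed directly; a small sketch with illustrative numbers taken from Section 5:

# Sketch of the linear power model (17) and the efficiency metric eta_s.
def server_power(load_bps, c_max_bps, p_max_w, a):
    p_idle = p_max_w * a            # P_max = P_idle / a  =>  P_idle = a * P_max
    return p_idle + (p_max_w - p_idle) * load_bps / c_max_bps

def eta(load_bps, c_max_bps, p_max_w, a):
    return load_bps / server_power(load_bps, c_max_bps, p_max_w, a)

# Illustrative values from Section 5: P_max = 1000 W, C_s = 10 Gbps.
print(eta(2e9, 10e9, 1000.0, a=1.0))   # constant power: 2 Gbps / 1000 W
print(eta(2e9, 10e9, 1000.0, a=0.1))   # rate adaptive: higher efficiency at low load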

The proposed algorithm tries switching off the server with the minimum power consumption per bit carried. For this reason, when the consolidation algorithm is executed, it starts selecting the server $s_{min}$ with the minimum value of $\eta_s$ as the one to turn off.



Figure 4: Flow chart of the SFC consolidation algorithm.

A set of possible destination servers able to host the VMs executed on $s_{min}$ is then evaluated and sorted in descending order, based again on the value of $\eta_s$. The algorithm proceeds selecting the first element of the ordered set and evaluating if there are enough resources in the site to reroute all the flows generated from or terminated at $s_{min}$. If it succeeds, the migration is performed and the server $s_{min}$ is turned off. If it fails, the destination server is removed from the set and the next server of the list is considered. When the ordered set of possible destinations becomes empty, another server to turn off is selected.
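The selection logic just described can be condensed into a few lines; the following sketch assumes hypothetical helper functions (eta, can_host, migrate) and returns the server that was switched off, if any:

# Sketch of the consolidation selection loop (helper functions are assumed):
# eta(s) returns the efficiency of server s; can_host(dst, src) checks whether
# dst has enough spare resources to absorb all flows of src.
def consolidate_once(active_servers, eta, can_host, migrate):
    s_min = min(active_servers, key=eta)
    candidates = sorted((s for s in active_servers if s != s_min),
                        key=eta, reverse=True)
    for dst in candidates:
        if can_host(dst, s_min):
            migrate(s_min, dst)       # reroute flows, then switch s_min off
            return s_min              # server turned off
    return None                       # no feasible destination found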

The SFC Consolidation Module also manages the resource deconsolidation procedure when the reactivation of physical servers/links is needed. Basically, the deconsolidation algorithm selects the most loaded VMs already migrated to a new server and moves them back to their original servers. The flows handled by these VMs are accordingly rerouted.

5. Numerical Results

We evaluate the effectiveness of the proposed algorithms in the case of the network scenario reported in Figure 5. The network is composed of five core nodes, five edge nodes, and six access nodes in which the SFC requests are randomly generated and terminated. A Network Function Virtualization (NFV) site is connected to edge nodes 3 and 4 through two access routers 1 and 2. The NFV site is also composed of two switches and sixteen servers. In the basic configuration, 40 Gbps links are considered, except the links connecting the servers to the switches in the NFV site, whose rate is equal to 10 Gbps. The sixteen servers are equipped with 48 Vcores each.

Each SFC request is composed of three Virtual Network Functions (VNFs), that is, a firewall, a load balancer, and VPN encryption, whose packet processing times are assumed to be equal to 7.08 μs, 0.65 μs, and 1.64 μs [27], respectively. We also assume that the load balancer splits the input traffic towards a number of output links chosen uniformly from 1 to 3.

An example of graph for a generated SFC is reported in Figure 6, in which $u_t$ is an ingress access node and $v^1_t$, $v^2_t$, and $v^3_t$ are egress access nodes. The first and second VNFs are a firewall and VPN encryption; the third VNF is a load balancer splitting the traffic towards three output links. In the considered case study we also assume that the SFC handles traffic flows of packet length equal to 1500 bytes and required bandwidth uniformly distributed from 500 Mbps to 1 Gbps.

5.1. Offline SFC Scheduling. In the offline case we assume that a given number $T$ of SFC requests are a priori known. For the case study previously described, we apply the MASRN heuristic proposed in Section 3.3. We report the percentage of dropped SFCs in Figure 7 as a function of the offered number of SFCs. We report the curve for the basic scenario and the ones in which the link capacities are doubled, quadrupled, and increased by ten times, respectively. From Figure 7 we can see how, in order to have a dropped SFC percentage lower than 10%, the number of SFC requests has to be smaller than 60 in the case of the basic scenario. This remarkable blocking is due to the shortage of network capacity. In fact, we can notice from Figure 7 how the dropped SFC percentage significantly decreases when the link bandwidth increases. For instance, when the capacity is quadrupled, the network infrastructure is able to accommodate up to 180 SFCs without any loss.

This is confirmed in Figure 8, where we report the number of Vcores occupied in the servers in the case of $T = 360$. Each of the servers is represented on the $x$-axis with the identification (ID) of Figure 5. We also show in Figure 8 the number of Vcores allocated to each firewall, load balancer, and VPN encryption VNF with the blue, green, and red colors, respectively. The basic scenario case and the ones in which the link capacity is doubled, quadrupled, and increased by 10 times are reported in Figures 8(a), 8(b), 8(c), and 8(d), respectively. From Figure 8 we can notice how the MASRN heuristic works correctly by guaranteeing a uniform occupancy of the servers in terms of the total number of Vcores used. We can also notice how the firewall VNF is the one needing the highest number of Vcores allocated. That is a consequence of the higher processing time that the firewall VNF requires with respect to the load balancer and VPN encryption VNFs.

5.2. Online SFC Scheduling. The numerical results are achieved in the following traffic scenario. The traffic pattern we consider is supposed to be sinusoidal, with a peak value during daytime and a minimum value during nighttime. We also assume that SFC requests follow a Poisson process and the SFC duration time is distributed according to an



Figure 5: The network topology is composed of six access nodes, five edge nodes, and five core nodes. The NFV site is composed of two routers, two switches, and sixteen servers.

exponential distribution. The following parameters define the SFC requests in the Peak Hour Interval (PHI):

(i) $\lambda$: the arrival rate of SFC requests;

(ii) $\mu$: the termination rate of an embedded SFC;

(iii) $[\beta^{min}, \beta^{max}]$: the range in which the bandwidth request for an SFC is randomly selected.

We consider $K$ time intervals in a day, and each of them is characterized by a scaling factor $\alpha_h$, with $h = 0, \ldots, K - 1$. In order to capture the sinusoidal shape of the traffic in a day, $\alpha_h$ is evaluated with the following expression:

$\alpha_h = \begin{cases} 1 & \text{if } h = 0 \\ 1 - \dfrac{2h}{K}(1 - \alpha_{min}) & h = 1, \ldots, \dfrac{K}{2} \\ 1 - 2\dfrac{K - h}{K}(1 - \alpha_{min}) & h = \dfrac{K}{2} + 1, \ldots, K - 1, \end{cases}$  (18)

where $\alpha_0$ and $\alpha_{min}$ refer to the peak traffic and the minimum traffic scenario, respectively. The parameter $\alpha_h$ affects the SFC arrival rate and the bandwidth requested. In particular, we have the following



Figure 6: An example of SFC composed of one firewall VNF, one VPN encryption VNF, and one load balancer VNF that splits the input traffic towards three output logical links. $u_t$ denotes the ingress access node, while $v^1_t$, $v^2_t$, and $v^3_t$ denote the egress access nodes.


Figure 7: Dropped SFC percentage as a function of the number of SFCs offered. The four curves are reported for the basic scenario case and the ones in which the link capacities are doubled, quadrupled, and increased by 10 times.

expression for the SFC request rate $\lambda_h$ and the minimum and the maximum requested bandwidths $\beta^{min}_h$ and $\beta^{max}_h$, respectively, in the interval $h$ ($h = 0, \ldots, K - 1$):

$\lambda_h = \alpha_h \lambda, \quad \beta^{min}_h = \alpha_h \beta^{min}, \quad \beta^{max}_h = \alpha_h \beta^{max}$.  (19)
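For concreteness, (18) and (19) can be evaluated directly; the sketch below uses $K = 24$ as in our experiments, while the value of $\alpha_{min}$ is chosen arbitrarily for illustration:

# Sketch: scaling factor alpha_h of (18) and scaled SFC parameters of (19).
def alpha(h, K, alpha_min):
    if h == 0:
        return 1.0
    if 1 <= h <= K // 2:
        return 1.0 - (2.0 * h / K) * (1.0 - alpha_min)
    return 1.0 - 2.0 * ((K - h) / K) * (1.0 - alpha_min)

K, alpha_min, lam, b_min, b_max = 24, 0.2, 1.0, 500e6, 1e9
for h in (0, K // 2, K - 1):
    a_h = alpha(h, K, alpha_min)
    print(h, a_h, a_h * lam, a_h * b_min, a_h * b_max)   # per (19)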

Next we evaluate the performance of the SFC planner proposed in Section 4 in the dynamic traffic scenario, for the parameter values $\beta^{min} = 500$ Mbps, $\beta^{max} = 1$ Gbps, $\lambda = 1$ req/min, and $\mu = 1/15$ min$^{-1}$. We also assume $K = 24$, with traffic change and consequently application of the consolidation algorithm every hour. Finally, we consider servers whose power consumption is characterized by the expression (17), with parameters $P_{max} = 1000$ W and $C_s = 10$ Gbps.

We report the SFC blocking probability as a function of the average number of offered SFC requests during the PHI ($\lambda/\mu$) in Figure 9. We show the results in the case of the basic link capacity scenario and in the ones obtained considering a capacity of the links that is twice and four times higher than the basic one. We consider the case of servers with no rate adaptive power consumption ($a = 1$). As we can observe, when the offered traffic is low, the degradation in terms of SFC blocking probability introduced by the consolidation technique increases. In fact, in this low traffic scenario, more resource consolidation is possible, with the consequence of higher SFC blocking probabilities. When the offered traffic increases, the blocking probability with or without applying the consolidation technique does not change much, because the network is heavily loaded and only few servers can be turned off. Furthermore, as we can expect, the blocking probability decreases when the link capacity increases. The performance loss due to the application of the consolidation technique is what we have to pay in order to obtain benefits in terms of power saved by turning off servers. The curves in Figure 10 compare the power consumption percentage savings that we can achieve in the cases $a = 0.1$ and $a = 1$. The results show that when the offered traffic decreases and the link capacity increases, higher power consumption savings can be achieved. The reason is that we have more resources on the links and less loaded VMs that allow for a higher number of VM migrations and resource consolidation. The other result to notice is that the use of rate adaptive servers reduces the power consumption saving when we perform resource consolidation. This is due to the lower value $P_{idle}$ of the constant power consumption of the server, which leads to lower power consumption saving when a server is switched off.

6. Conclusions

The aim of this paper has been to propose heuristics for the resource dimensioning and the routing of Service Function Chains in network architectures employing the Network Function Virtualization technology. We have introduced an optimization problem that has the objective of minimizing the number of dropped SFC requests while respecting the server processing and link bandwidth capacities. With the problem being NP-hard, we have proposed the Maximizing the Accepted SFC Requests Number (MASRN) heuristic, which is based on the uniform occupancy of the server and link resources. The heuristic is also able to dimension the Vcores of the servers by evaluating the number of Vcores to be allocated to each VNF instance.

An SFC planner has also been proposed for the dynamic traffic scenario. The planner is based on a consolidation algorithm that allows for the switching off of servers during the low traffic periods. A case study has been analyzed in which we have shown that the heuristic works correctly by uniformly allocating the server resources and reserving a higher number of Vcores for the VNFs characterized by higher packet processing times. Furthermore, the proposed SFC planner allows for a power consumption saving dependent on the offered traffic and varying from 10% to 70%. Such a saving is to be paid with an increase in SFC blocking probability. As future research, we will propose and investigate Virtual Machine migration policies allowing for the achievement of the right trade-off between power consumption saving and SFC blocking probability degradation.



Figure 8: Number of allocated Vcores in each server, for $T = 360$ and link capacities multiplied by 1 (a), 2 (b), 4 (c), and 10 (d). The numbers of Vcores for firewall, load balancer, and VPN encryption VNFs are represented with blue, green, and red colors, respectively.


Figure 9: SFC blocking probability as a function of the average number of SFCs offered in the PHI, for the cases in which the consolidation technique is applied and it is not. The server power consumption is constant ($a = 1$).



Figure 10: The power consumption percentage saving achieved with the application of the consolidation technique versus the average number of SFCs offered in the PHI. The cases $a = 1$ and $a = 0.1$ are considered.


Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

The work described in this paper has been funded by Telecom Italia within the research agreement titled "Definition and Evaluation of Protocols and Architectures of a Network Automation Platform for TI-NVF Infrastructure".


Research Article
A Game for Energy-Aware Allocation of Virtualized Network Functions

Roberto Bruschi,1 Alessandro Carrega,1,2 and Franco Davoli1,2

1CNIT-University of Genoa Research Unit, 16145 Genoa, Italy
2DITEN, University of Genoa, 16145 Genoa, Italy

Correspondence should be addressed to Alessandro Carrega; alessandro.carrega@unige.it

Received 2 October 2015; Revised 22 December 2015; Accepted 11 January 2016

Academic Editor: Xavier Hesselbach

Copyright © 2016 Roberto Bruschi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Network Functions Virtualization (NFV) is a network architecture concept where network functionality is virtualized and separated into multiple building blocks that may connect or be chained together to implement the required services. The main advantages consist of an increase in network flexibility and scalability. Indeed, each part of the service chain can be allocated and reallocated at runtime depending on demand. In this paper, we present and evaluate an energy-aware Game-Theory-based solution for resource allocation of Virtualized Network Functions (VNFs) within NFV environments. We consider each VNF as a player of the problem that competes for the physical network node capacity pool, seeking the minimization of individual cost functions. The physical network nodes dynamically adjust their processing capacity according to the incoming workload, by means of an Adaptive Rate (AR) strategy that aims at minimizing the product of energy consumption and processing delay. On the basis of the result of the nodes' AR strategy, the VNFs' resource sharing costs assume a polynomial form in the workloads, which admits a unique Nash Equilibrium (NE). We examine the effect of different (unconstrained and constrained) forms of the nodes' optimization problem on the equilibrium and compare the power consumption and delay achieved with energy-aware and non-energy-aware strategy profiles.

1. Introduction

In the last few years, power consumption has shown a growing and alarming trend in all industrial sectors, particularly in Information and Communication Technology (ICT). Public organizations, Internet Service Providers (ISPs), and telecom operators started reporting alarming statistics of network energy requirements and of the related carbon footprint since the first decade of the 2000s [1]. The Global e-Sustainability Initiative (GeSI) estimated a growth of ICT greenhouse gas emissions (in GtCO2e, Gt of CO2-equivalent gases) to 2.3% of global emissions (from 1.3% in 2002) in 2020, if no Green Network Technologies (GNTs) would be adopted [2]. On the other hand, the abatement potential of ICT in other industrial sectors is seven times the size of the ICT sector's own carbon footprint.

Only recently, due to the rise in energy price, the continuous growth of customer population, the increase in broadband access demand, and the expanding number of services being offered by telecoms and providers, has energy efficiency become a high-priority objective also for wired networks and service infrastructures (after having started to be addressed for datacenters and wireless networks).

The increasing network energy consumption essentially depends on new services offered, which follow Moore's law by doubling every two years, and on the need to sustain an ever-growing population of users and user devices. In order to support new generation network infrastructures and related services, telecoms and ISPs need a larger equipment base, with sophisticated architecture able to perform more and more complex operations in a scalable way. Notwithstanding these efforts, it is well known that most networks and networking equipment are currently still provisioned for busy or rush hour load, which typically exceeds their average utilization by a wide margin. While this margin is generally reached in rare and short time periods, the overall power consumption in today's networks remains more or less constant with respect to different traffic utilization levels.



The growing trend toward implementing networking functionalities by means of software [3] on general-purpose machines and of making more aggressive use of virtualization, as represented by the paradigms of Software Defined Networking (SDN) [4] and Network Functions Virtualization (NFV) [5], would also not be sufficient in itself to reduce power consumption, unless accompanied by "green" optimization and consolidation strategies acting as energy-aware traffic engineering policies at the network level [6]. At the same time, processing devices inside the network need to be capable of adapting their performance to the changing traffic conditions by trading off power and Quality of Service (QoS) requirements. Among the various techniques that can be adopted to this purpose to implement Control Policies (CPs) in network processing devices, Dynamic Adaptation ones consist of adapting the processing rate (AR) or of exploiting low power consumption states in idle periods (LPI) [7].

In this paper, we introduce a Game-Theory-based solution for energy-aware allocation of Virtualized Network Functions (VNFs) within NFV environments. In more detail, in NFV networks a collection of service chains must be allocated on physical network nodes. A service chain is a set of one or more VNFs grouped together to provide specific service functionality and can be represented by an oriented graph, where each node corresponds to a particular VNF and each edge describes the operational flow exchanged between a pair of VNFs.

A service request can be allocated on dedicated hardware or by using resources deployed by the Service Provider (SP) that processes the request through virtualized instances. Because of this, two types of service deployments are possible in an NFV network: (i) on physical nodes and (ii) on virtualized instances.

In this paper we focus on the second type of service deployment. We refer to this paradigm as pure NFV. As already outlined above, the SP processes the service request by means of VNFs. In particular, we developed an energy-aware solution to the problem of VNFs' allocation on physical network nodes. This solution is based on the concept of Game Theory (GT). GT is used to model interactions among self-interested players and predict their choice of strategies to optimize cost or utility functions, until a Nash Equilibrium (NE) is reached, where no player can further increase its corresponding utility through individual action (see, e.g., [8] for specific applications in networking).

More specifically, we consider a bank of physical network nodes (in this paper we also use the terms node and resource to refer to the physical network node) performing tasks on requests submitted by the players' population. Hence, in this game the role of the players is represented by VNFs that compete for the processing capacity pool, each by seeking the minimization of an individual cost function. The nodes can dynamically adjust their processing capacity according to the incoming workload (the processing power required by incoming VNF requests) by means of an AR strategy that aims at minimizing the product of energy consumption and processing delay. On the basis of the result of the nodes' AR strategy, the VNFs' resource sharing costs assume a polynomial form in the workloads, which admits a unique NE.

We examine the effect of different (unconstrained and constrained) forms of the nodes' optimization problem on the equilibrium and compare the power consumption and delay achieved with energy-aware and non-energy-aware strategy profiles.

The rest of the paper is organized as follows. Section 2 discusses the related work. Section 3 briefly summarizes the power model and AR optimization that was introduced in [9]. Although the scenario in [9] is different, it is reasonable to assume that similar considerations are valid here too. On the basis of this result, we derive a number of competitive strategies for the VNFs' allocation in Section 4. Section 5 presents various numerical evaluations, and Section 6 contains the conclusions.

2. Related Work

We now discuss the most relevant works that deal with the resource allocation of VNFs within NFV environments. We decided not only to describe solutions based on GT, but also to provide an overview of the current state of the art in this area. As introduced in the previous section, the key enabling paradigms that will considerably affect the dynamics of ICT networks are SDN and NFV, which are discussed in recent surveys [10–12]. Indeed, SPs and Network Operators (NOs) are facing increasing problems to design and implement novel network functionalities, following rapid changes that characterize the current ISPs and telecom operators (TOs) [13].

Virtualization represents an efficient and cost-effective strategy to exploit and share physical network resources. In this context, the Network Embedding Problem (NEP) has been considered in several recent works [14–19]. In particular, the Virtual Network Embedding Problem (VNEP) consists of finding the mapping between a set of requests for virtual network resources and the available underlying physical infrastructure (the so-called substrate), ensuring that some given performance requirements (on nodes and links) are guaranteed. Typical node requirements are computational resources (i.e., CPU) or storage space, whereas links have a limited bandwidth and introduce a delay. It has been shown that this problem is NP-hard (it includes as subproblem the multiway separator problem). For this reason, heuristic approaches have been devised [20].

The consolidation of virtual resources is considered in [18] by taking into account energy efficiency. The problem is formulated as a mixed integer linear programming (MILP) model, to understand the potential benefits that can be achieved by packing many different virtual tasks on the same physical infrastructure. The observed energy saving is up to 30% for nodes and up to 25% for link energy consumption. Reference [21] presents a solution for the resilient deployment of network functions, using OpenStack for the design and implementation of the proposed service orchestrator mechanism.

An allocation mechanism based on auction theory is proposed in [22]. In particular, the scheme selects the most remunerative virtual network requests, while satisfying QoS requirements and physical network constraints. The system is split into two network substrates modeling physical and virtual resources, with the final goal of finding the best mapping of virtual nodes and links onto physical ones, according to the QoS requirements (i.e., bandwidth, delay, and CPU bounds).

A novel network architecture is proposed in [23], to provide efficient coordinated control of both internal network function state and network forwarding state, in order to help operators achieve the following goals: (i) offering and satisfying tight service level agreements (SLAs); (ii) accurately monitoring and manipulating network traffic; and (iii) minimizing operating expenses.

Various engineering problems, where the action of one component has some impacts on the other components, have been modeled by GT. Indeed, GT, which has been applied at the beginning in economics and related domains, is gaining much interest today as a powerful tool to analyze and design communication networks [24]. Therefore, the problems can be formulated in the GT framework, and a stable solution for the components is obtained using the concept of equilibrium [25]. In this regard, GT has been used extensively to develop understanding of stable operating points for autonomous networks. The nodes are considered as the players. Payoff functions are often defined according to achieved connection bandwidth or similar technical metrics.

To construct algorithms with provable convergence to equilibrium points, many approaches consider network models that can be mapped to specially constructed games. Among this type of games, potential games use a real-valued function that represents the entire player set to optimize some performance metric [8, 26]. We mention Altman et al. [27], who provide an extensive survey on networking games. The models and papers discussed in this reference mostly deal with noncooperative GT, the only exception being a short section focused on bargaining games. Finally, Seddiki et al. [25] presented an approach based on two-stage noncooperative games for bandwidth allocation, which aims at reducing the complexity of network management and avoiding bandwidth performance problems in a virtualized network environment. The first stage of the game is the bandwidth negotiation, where the SP requests bandwidth from multiple Infrastructure Providers (InPs). Each InP decides whether to accept or deny the request when the SP would cause link congestion. The second stage is the bandwidth provisioning game, where different SPs compete for the bandwidth capacity of a physical link managed by a single InP.

3. Modeling Power Management Techniques

Dynamic Adaptation techniques in physical network nodes reduce the energy usage by exploiting the fact that systems do not need to run at peak performance all the time.

Rate adaptation is obtained by tuning the clock frequency and/or voltage of processors (DVFS, Dynamic Voltage and Frequency Scaling) or by throttling the CPU clock (i.e., the clock signal is disabled for some number of cycles at regular intervals). Decreasing the operating frequency and the voltage of the node, or throttling its clock, obviously allows the reduction of power consumption and of heat dissipation, at the price of slower performance.

Advantage can be taken of these adaptation capabilities to devise control techniques based on feedback on the incoming load. One of the first proposals is that examined in a seminal paper by Nedevschi et al. [28]. Among other possibilities, we consider here the simple optimization strategy developed in [9], aimed at minimizing the product of power consumption and processing delay with respect to the processing load. The application of this control technique gives rise to a quadratic dependence of the power-delay product on the load, which will be exploited in our subsequent game-theoretic resource allocation problem. Unlike the cited works, our solution considers as load the requests rate of services that the SP must process. However, apart from this difference, the general considerations described are valid here too.

Specifically, the dynamic frequency-dependent part of the processor power consumption is proportional to the clock frequency $\nu$ and to the square of the supply voltage $V$ [29]. However, if DVFS is used, there is proportionality between the frequency and the voltage raised to some power $\gamma$, $0 < \gamma \le 1$ [30]. Moreover, the power scaling effect induced by alternating active-idle periods in the queueing system served by the physical network node can be accounted for by multiplying the dynamic power consumption by an increasing concave function of the resource utilization, of the form $(f/\mu)^{1/\alpha}$, where $\alpha \ge 1$ is a parameter and $f$ and $\mu$ are the workload (processing requests per unit time) and the task processing speed, respectively. By taking $\gamma = 1$ (which implies a cubic relationship in the operating frequency), the overall dependence of the power consumption $\Phi$ on the processing speed $\mu$ (which is proportional to the operating frequency) can be expressed as [9]

$$\Phi(\mu) = K \mu^3 \left(\frac{f}{\mu}\right)^{1/\alpha}, \quad (1)$$

where $K > 0$ is a proportionality constant. This expression will be exploited in the optimization problems to be defined and studied in the next section.
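For concreteness, the following minimal Python sketch (the function name and default parameter values are ours, not from the paper) evaluates the power model of (1):

```python
def node_power(mu, f, K=1.0, alpha=2.0):
    """Dynamic power model of (1): Phi(mu) = K * mu**3 * (f/mu)**(1/alpha).

    mu    : task processing speed (proportional to the clock frequency)
    f     : workload, i.e., processing requests per unit time (0 < f <= mu)
    K     : proportionality constant (K > 0)
    alpha : utilization-scaling parameter (alpha >= 1)
    """
    assert 0 < f <= mu and K > 0 and alpha >= 1
    return K * mu**3 * (f / mu) ** (1.0 / alpha)
```

Note the two limiting behaviors: for $\alpha = 1$ the idle-period scaling makes power proportional to $\mu^2 f$, while for large $\alpha$ it approaches the pure cubic law $K\mu^3$.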

4. System Model of Coordinated Network Management Optimization and Players' Game

We consider the situation shown in Figure 1. There are $S$ VNFs, represented by the incoming workload rates $\lambda^1, \ldots, \lambda^S$, competing for $N$ physical network nodes. The workload to the $j$th node is given by

$$f_j = \sum_{i=1}^{S} f_j^i, \quad (2)$$

where $f_j^i$ is VNF $i$'s contribution. The total workload vector of player $i$ is $\mathbf{f}^i = \operatorname{col}[f_1^i, \ldots, f_N^i]$ and, obviously,

$$\sum_{j=1}^{N} f_j^i = \lambda^i. \quad (3)$$


Figure 1: System model for VNFs resource allocation.

The computational resources have processing rates $\mu_j$, $j = 1, \ldots, N$, and we associate a cost function with each one, namely,

$$c_j(\mu_j) = \frac{K_j (f_j)^{1/\alpha_j} (\mu_j)^{3 - (1/\alpha_j)}}{\mu_j - f_j}. \quad (4)$$

Cost functions of the form shown in (4) have been introduced in [9] and represent the product of power and processing delay under an M/M/1 assumption on each node's queue. They actually correspond to the average energy consumption that can be attributed to functions' handling (considering that the node will be active whenever there are queued functions). Despite the possible inaccuracy in the representation of the queueing behavior, the denominator of (4) reflects the penalty paid for approaching the resource capacity, which is a major aspect in a model to be used for control purposes.
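A direct implementation of (4) (a sketch under our naming, reusing the node_power helper above) makes the power-delay decomposition explicit:

```python
def node_cost(mu, f, K=1.0, alpha=2.0):
    """Cost of (4): the power of (1) times the M/M/1 delay 1/(mu - f).

    Requires mu > f (queue stability); the cost diverges as mu -> f,
    reflecting the penalty for approaching the resource capacity.
    """
    assert mu > f > 0
    return node_power(mu, f, K, alpha) / (mu - f)
```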

We now consider two specific control problems, whose resulting resource allocations are interconnected. Specifically, we assume the presence of Control Policies (CPs) acting in the network nodes, whose aim is to dynamically adjust the processing rate in order to minimize cost functions of the form of (4). On the other hand, the players representing the VNFs, knowing the policy adopted by the CPs and the resulting form of their optimal costs, implement their own strategies to distribute their requests among the different resources.

The simple control structure that we envisage actually represents a situation that may be of interest in a number of different operational settings. We only mention two different cases of current high relevance: (i) a multitenant environment, where a number of ISPs are offered services by a datacenter operator (or by a telecom operator in the access network) on a number of virtual machines, and (ii) a collection of virtual environments created by a SP on behalf of its customers, where services are activated to represent customers' functionalities on their own private virtual LANs. It is worth noting that in both cases the nodes' customers may be unwilling to disclose their loads to one another, which justifies the decentralized game optimization.

4.1. Nodes' Control Policies. We consider two possible variants.

4.1.1. CP1 (Unconstrained). The direct minimization of (4) with respect to $\mu_j$ yields immediately

$$\mu_j^*(f_j) = \frac{3\alpha_j - 1}{2\alpha_j - 1} f_j = \theta_j f_j, \qquad c_j^*(f_j) = K_j \frac{(\theta_j)^{3 - (1/\alpha_j)}}{\theta_j - 1} (f_j)^2 = h_j \cdot (f_j)^2. \quad (5)$$

We must note that we are considering a continuous solution to the node capacity adjustment. In practice, the physical resources allow a discrete set of working frequencies, with corresponding processing capacities. This would also ensure that the processing speed does not decrease below a lower threshold, avoiding excessive delay in the case of low load. The unconstrained problem is an idealized variant that would make sense only when the load on the node is not too small.
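The closed form in (5) is easy to verify numerically; the sketch below (our code, assuming the node_cost helper defined earlier and SciPy's bounded scalar minimizer) compares the analytic optimum $\theta_j f_j$ with a numerical minimization of (4):

```python
from scipy.optimize import minimize_scalar

def cp1_rate(f, alpha):
    """CP1 closed form of (5): mu* = theta * f, theta = (3a - 1)/(2a - 1)."""
    theta = (3 * alpha - 1) / (2 * alpha - 1)
    return theta * f

f, K, alpha = 0.6, 2.0, 2.5
res = minimize_scalar(lambda mu: node_cost(mu, f, K, alpha),
                      bounds=(f * (1 + 1e-6), 10 * f), method="bounded")
print(res.x, cp1_rate(f, alpha))  # both approx. 0.975 = theta * f, theta = 1.625
```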

4.1.2. CP2 (Individual Constraints). Each node has a maximum and a minimum processing capacity, which are taken explicitly into account ($\mu_j^{\min} \le \mu_j \le \mu_j^{\max}$, $j = 1, \ldots, N$). Then

$$\mu_j^*(f_j) = \begin{cases} \theta_j f_j, & \underline{f}_j = \dfrac{\mu_j^{\min}}{\theta_j} \le f_j \le \dfrac{\mu_j^{\max}}{\theta_j} = \overline{f}_j, \\ \mu_j^{\min}, & 0 < f_j < \underline{f}_j, \\ \mu_j^{\max}, & \mu_j^{\max} > f_j > \overline{f}_j. \end{cases} \quad (6)$$

Therefore,

$$c_j^*(f_j) = \begin{cases} h_j \cdot (f_j)^2, & \underline{f}_j \le f_j \le \overline{f}_j, \\[4pt] K_j \dfrac{(f_j)^{1/\alpha_j} (\mu_j^{\max})^{3 - (1/\alpha_j)}}{\mu_j^{\max} - f_j}, & \mu_j^{\max} > f_j > \overline{f}_j, \\[4pt] K_j \dfrac{(f_j)^{1/\alpha_j} (\mu_j^{\min})^{3 - (1/\alpha_j)}}{\mu_j^{\min} - f_j}, & 0 < f_j < \underline{f}_j. \end{cases} \quad (7)$$

In this case, we assume explicitly that $\sum_{i=1}^{S} \lambda^i < \sum_{j=1}^{N} \mu_j^{\max}$.
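In code, CP2 is just the CP1 rate clamped to the admissible interval; a sketch (ours, reusing node_cost):

```python
def cp2_rate(f, alpha, mu_min, mu_max):
    """CP2 solution of (6): the CP1 rate theta*f clamped to [mu_min, mu_max]."""
    theta = (3 * alpha - 1) / (2 * alpha - 1)
    return min(max(theta * f, mu_min), mu_max)

def cp2_cost(f, K, alpha, mu_min, mu_max):
    """Optimal node cost of (7): (4) evaluated at the clamped rate.
    Requires f < mu_max, consistent with the total-demand assumption."""
    return node_cost(cp2_rate(f, alpha, mu_min, mu_max), f, K, alpha)
```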

4.2. Noncooperative Players' Optimization. Given the optimal CPs, which can be found in functional form, we can then state the following.

4.2.1. Players' Problem. Each VNF $i$ wants to minimize, with respect to its workload vector $\mathbf{f}^i = \operatorname{col}[f_1^i, \ldots, f_N^i]$, a weighted (over its workload distribution) sum of the resources' costs, given the flow distribution of the others:

$$\mathbf{f}^{i*} = \operatorname*{argmin}_{\substack{f_1^i, \ldots, f_N^i \\ \sum_{j=1}^{N} f_j^i = \lambda^i}} J^i = \operatorname*{argmin}_{\substack{f_1^i, \ldots, f_N^i \\ \sum_{j=1}^{N} f_j^i = \lambda^i}} \frac{1}{\lambda^i} \sum_{j=1}^{N} f_j^i c_j^*(f_j), \quad i = 1, \ldots, S. \quad (8)$$

In this formulation, the players' problem is a noncooperative game, of which we can seek a NE [31].

We examine the case of the application of CP1 first. Then, the cost of VNF $i$ is given by

$$J^i = \frac{1}{\lambda^i} \sum_{j=1}^{N} f_j^i h_j \cdot (f_j)^2 = \frac{1}{\lambda^i} \sum_{j=1}^{N} f_j^i h_j \cdot (f_j^i + f_j^{-i})^2, \quad (9)$$

where $f_j^{-i} = \sum_{k \ne i} f_j^k$ is the total flow from the other VNFs to node $j$. The cost in (9) is convex in $f_j^i$ and belongs to a category of cost functions previously investigated in the networking literature, without specific reference to energy efficiency [32, 33]. In particular, it is in the family of functions considered in [33], for which the NE Point (NEP) existence and uniqueness conditions of Rosen [34] have been shown to hold. Therefore, our players' problem admits a unique NEP. The latter can be found by considering the Karush-Kuhn-Tucker stationarity conditions with Lagrange multiplier $\xi^i$:

$$\xi^i = \frac{\partial J^i}{\partial f_k^i} \quad \text{if } f_k^i > 0, \qquad \xi^i \le \frac{\partial J^i}{\partial f_k^i} \quad \text{if } f_k^i = 0, \qquad k = 1, \ldots, N, \quad (10)$$

from which, for $f_k^i > 0$,

$$\lambda^i \xi^i = h_k (f_k^i + f_k^{-i})^2 + 2 h_k f_k^i (f_k^i + f_k^{-i}), \qquad f_k^i = -\frac{2}{3} f_k^{-i} \pm \frac{1}{3 h_k} \sqrt{(h_k f_k^{-i})^2 + 3 h_k \lambda^i \xi^i}. \quad (11)$$

Excluding the negative solution, the nonzero components are those with the smallest and equal partial derivatives; those in a subset $\mathcal{M} \subseteq \{1, \ldots, N\}$ yield $\sum_{j \in \mathcal{M}} f_j^i = \lambda^i$, that is,

$$\sum_{j \in \mathcal{M}} \left[ -\frac{2}{3} f_j^{-i} + \frac{1}{3 h_j} \sqrt{(h_j f_j^{-i})^2 + 3 h_j \lambda^i \xi^i} \right] = \lambda^i, \quad (12)$$

from which $\xi^i$ can be determined.
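Equations (11) and (12) suggest a simple numerical procedure: for each player, the left-hand side of (12) is increasing in $\xi^i$, so $\xi^i$ can be found by bisection, and players' best responses can then be iterated. The sketch below is our illustration (names and the Gauss-Seidel iteration are ours; the paper establishes uniqueness of the NEP, while convergence of this particular iteration is only observed empirically):

```python
import numpy as np

def best_response(lam_i, F, h, tol=1e-10):
    """Best response of one VNF: split its rate lam_i over the N nodes,
    given the aggregate flows F[j] of the other players and the cost
    coefficients h[j] of (5). Implements (11)-(12) via bisection on
    the Lagrange multiplier xi (the KKT map is monotone in xi)."""
    def flows(xi):
        root = np.sqrt((h * F) ** 2 + 3.0 * h * lam_i * xi)
        return np.maximum(0.0, -(2.0 / 3.0) * F + root / (3.0 * h))

    lo, hi = 0.0, 1.0
    while flows(hi).sum() < lam_i:   # bracket xi from above
        hi *= 2.0
    while hi - lo > tol:             # bisection on xi
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if flows(mid).sum() < lam_i else (lo, mid)
    return flows(hi)

def quadratic_game_nep(lam, h, iters=200):
    """Best-response (Gauss-Seidel) iteration for the game with costs (9)."""
    lam, h = np.asarray(lam, float), np.asarray(h, float)
    f = np.outer(lam, np.ones_like(h)) / h.size   # feasible starting point
    for _ in range(iters):
        for i in range(lam.size):
            F = f.sum(axis=0) - f[i]              # others' aggregate flows
            f[i] = best_response(lam[i], F, h)
    return f                                      # f[i, j]: flow of VNF i to node j
```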

As regards form (7) of the optimal node cost, we can note that, if we are restricted to $\underline{f}_j \le f_j \le \overline{f}_j$, $j = 1, \ldots, N$ (a coupled convex constraint set), the conditions of Rosen still hold. If we allow the larger constraint set $0 < f_j < \mu_j^{\max}$, $j = 1, \ldots, N$, the nodes' optimal cost functions are no longer of polynomial type. However, the composite function is still continuously differentiable (see the Appendix). Then it would (i) satisfy the conditions for a convex game (the overall function composed of the three parts is convex), which guarantee the existence of a NEP [34, Th. 1], and (ii) possess equivalent properties to functions of Type A as defined in [32], which lead to uniqueness of the NEP.

5. Numerical Results

In order to evaluate our criterion, we present numerical results deriving from the application of the simplest Control Policy (CP1), which implies the solution of a completely quadratic problem (corresponding to costs as in (9) for the NEP). We compare the resulting allocations, power consumption, and average delays with those stemming from the application (on non-energy-aware nodes) of the algorithm for the minimization of the pure delay function that was developed in [35, Proposition 1]. This algorithm is considered a valid reference point in order to provide good validation of our criterion. The corresponding cost of VNF $i$ is

$$J_T^i = \sum_{j=1}^{N} f_j^i T_j(f_j), \quad i = 1, \ldots, S, \quad (13)$$

where

$$T_j(f_j) = \begin{cases} \dfrac{1}{\mu_j^{\max} - f_j}, & f_j < \mu_j^{\max}, \\ \infty, & f_j \ge \mu_j^{\max}. \end{cases} \quad (14)$$

The total demand is assumed to be less than the total operating capacity, as we have done with our CP2 in (7).

To make the comparison fair, we have to make sure that the maximum operating capacities $\mu_j^{\max}$, $j = 1, \ldots, N$ (that are constant in (14)), are compatible with the quadratic behavior stemming from the application of CP1. To this aim, let $^{(\mathrm{LCP1})}f_j$ denote the total input workload to node $j$ corresponding to the NEP obtained by our problem; we then choose

$$\mu_j^{\max} = \theta_j \, {}^{(\mathrm{LCP1})}f_j, \quad j = 1, \ldots, N. \quad (15)$$

We indicate with $^{(T)}f_j$, $^{(T)}f_j^i$, $j = 1, \ldots, N$, $i = 1, \ldots, S$, the total node input workloads and those corresponding to VNF $i$, respectively, produced by the NEP deriving from the minimization of costs in (13). The average delays for the flow of player $i$ under the two strategy profiles are obtained as

$$D_T^i = \frac{1}{\lambda^i} \sum_{j=1}^{N} \frac{^{(T)}f_j^i}{\mu_j^{\max} - {}^{(T)}f_j} = \frac{1}{\lambda^i} \sum_{j=1}^{N} \frac{^{(T)}f_j^i}{\theta_j \, {}^{(\mathrm{LCP1})}f_j - {}^{(T)}f_j},$$
$$D_{\mathrm{LCP1}}^i = \frac{1}{\lambda^i} \sum_{j=1}^{N} \frac{^{(\mathrm{LCP1})}f_j^i}{\theta_j \, {}^{(\mathrm{LCP1})}f_j - {}^{(\mathrm{LCP1})}f_j} = \frac{1}{\lambda^i} \sum_{j=1}^{N} \frac{^{(\mathrm{LCP1})}f_j^i}{(\theta_j - 1) \, {}^{(\mathrm{LCP1})}f_j}, \quad (16)$$

and the power consumption values are given by

$$^{(T)}P_j = K_j (\mu_j^{\max})^3 = K_j (\theta_j \, {}^{(\mathrm{LCP1})}f_j)^3,$$
$$^{(\mathrm{LCP1})}P_j = K_j (\theta_j \, {}^{(\mathrm{LCP1})}f_j)^3 \left( \frac{^{(\mathrm{LCP1})}f_j}{\theta_j \, {}^{(\mathrm{LCP1})}f_j} \right)^{1/\alpha_j} = K_j (\theta_j \, {}^{(\mathrm{LCP1})}f_j)^3 \, \theta_j^{-1/\alpha_j}. \quad (17)$$
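The comparison metrics of (16) and (17) translate directly into a few NumPy lines; the following sketch (ours, continuing the previous snippets) takes the two $S \times N$ NEP flow matrices and returns per-VNF delays and per-node powers:

```python
def compare_metrics(f_ee, f_nee, lam, K, alpha):
    """Delays (16) and powers (17) for the energy-aware flows f_ee (LCP1)
    and the non-energy-aware flows f_nee (T); both are S x N arrays,
    with mu_max fixed as in (15): mu_max[j] = theta[j] * sum_i f_ee[i, j]."""
    theta = (3 * alpha - 1) / (2 * alpha - 1)
    tot_ee, tot_nee = f_ee.sum(axis=0), f_nee.sum(axis=0)
    mu_max = theta * tot_ee                                     # (15)
    D_nee = (f_nee / (mu_max - tot_nee)).sum(axis=1) / lam      # (16), D_T
    D_ee = (f_ee / ((theta - 1.0) * tot_ee)).sum(axis=1) / lam  # (16), D_LCP1
    P_nee = K * mu_max ** 3                                     # (17), (T)P
    P_ee = P_nee * theta ** (-1.0 / alpha)                      # (17), (LCP1)P
    return D_ee, D_nee, P_ee, P_nee
```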

Our data for the comparison are organized as follows. We consider normalized input rates, so that $\lambda^i \le 1$, $i = 1, \ldots, S$. We generate randomly $S$ values corresponding to the VNFs' input rates from independent uniform distributions; then

(1) we find the NEP of our quadratic game by choosing the parameters $K_j$, $\alpha_j$, $j = 1, \ldots, N$, also from independent uniform distributions (with $1 \le K_j \le 10$, $2 \le \alpha_j \le 3$, resp.);

(2) by using the workload profiles obtained, we compute the values $\mu_j^{\max}$, $j = 1, \ldots, N$, from (15) and find the NEP corresponding to costs in (13) and (14) by using the same input rates;

(3) we compute the corresponding average delays and power consumption values for the two cases by using (16) and (17), respectively.

Steps (1)–(3) are repeated with a new configuration of random values of the input rates for a certain number of times; then, for both games, we average the values of the delays and power consumption per VNF, and of the total delay and power consumption averaged over the VNFs, over all trials. We compare the results obtained in this way for four different settings of VNFs and nodes, namely, $S = N = 3$; $S = 3$, $N = 5$; $S = 5$, $N = 3$; $S = 5$, $N = 5$. The rationale of repeating the experiments with random configurations is to explore a number of different possible settings and collect the results in a synthetic form.
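Putting the previous sketches together, the experiment loop for one VNFs-nodes setting might look as follows (our illustration only; the solver for the pure-delay game of step (2), based on [35], is omitted and indicated by the hypothetical name delay_game_nep):

```python
rng = np.random.default_rng(0)
S, N, trials = 3, 3, 30
for _ in range(trials):
    lam = rng.uniform(0.1, 1.0, size=S)      # normalized VNF input rates
    K = rng.uniform(1.0, 10.0, size=N)       # constants K_j
    alpha = rng.uniform(2.0, 3.0, size=N)    # exponents alpha_j
    theta = (3 * alpha - 1) / (2 * alpha - 1)
    h = K * theta ** (3.0 - 1.0 / alpha) / (theta - 1.0)   # h_j of (5)
    f_ee = quadratic_game_nep(lam, h)                      # step (1)
    mu_max = theta * f_ee.sum(axis=0)                      # (15)
    f_nee = delay_game_nep(lam, mu_max)                    # step (2), hypothetical
    metrics = compare_metrics(f_ee, f_nee, lam, K, alpha)  # step (3)
```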

For each VNFs-nodes setting, 30 random input configurations are generated to produce the final average values. In Figures 2–10, the labels EE and NEE refer to the energy-efficient case (quadratic cost) and to the non-energy-efficient one (see (13) and (14)), respectively. Instead, the label "Saving (%)" refers to the saving (in percentage) of the EE case compared to the NEE one; when it is negative, the EE case presents higher values than the NEE one. Finally, the label "User" refers to the VNF.

Figure 2: Power consumption with $S = N = 3$.

Figure 3: User (VNF) delays with $S = N = 3$.

The results are reported in Figures 2–10. The model implementation and the relative results have been developed with MATLAB [36].

In general, the energy-efficient optimization tends to save energy at the expense of a relatively small increase in delay. Indeed, as shown in the figures, the saving is positive for the energy consumption and negative for the delay. The energy saving is always higher than 18% in every case, while the delay increase is always lower than 4%.

However, it can be noted (by comparing, e.g., Figures 5 and 7) that configurations with lower load are penalized in the delay. This effect is caused by the behavior of the simple linear adaptation strategy, which maintains constant utilization, and highlights the relevance of setting not only maximum but also minimum values for the processing capacity (or for the node input workloads).

Figure 4: Power consumption with $S = 3$, $N = 5$.

Figure 5: User (VNF) delays with $S = 3$, $N = 5$.

Figure 6: Power consumption with $S = 5$, $N = 3$.

Figure 7: User (VNF) delays with $S = 5$, $N = 3$.

Figure 8: Power consumption with $S = 5$, $N = 5$.

Figure 9: User (VNF) delays with $S = 5$, $N = 5$.

Figure 10: Power consumption and delay product of the various cases.

Another consideration arising from the results concerns the variation of the energy consumption obtained by changing the number of players and/or nodes. For example, Figure 6 (case $S = 5$, $N = 3$) shows total power consumption significantly higher than the one shown in Figure 8 (case $S = 5$, $N = 5$). The reason for this difference is the lower workload managed by the nodes in the case $S = 5$, $N = 5$ (greater number of nodes with an equal number of users), making it possible to exploit the AR mechanisms with significant energy saving.

Finally, Figure 10 shows the product of the power consumption and the delay for each studied case. As described in the previous sections, our criterion aims at minimizing this product. As can be easily seen, the saving resulting from the application of our policy is always above 10%.

6. Conclusions

We have examined the possible effect of introducing simple energy optimization strategies in physical network nodes performing services for a set of VNFs that request their computational power within an NFV environment. The VNFs' strategy profiles for resource sharing have been obtained from the solution of a noncooperative game, where the players are aware of the adaptive behavior of the resources and aim at minimizing costs that reflect this behavior. We have compared the energy consumption and processing delay with those stemming from a game played with non-energy-aware nodes and VNFs. The results show that the effect on delay is almost negligible, whereas significant power saving can be obtained. In particular, the energy saving is always higher than 18% in every case, with a delay increase always lower than 4%.

From another point of view, it would also be of interest to investigate socially optimal solutions stemming from a Nash Bargaining Problem (NBP), along the lines of [37], by defining suitable utility functions for the players. Another interesting evolution of this work could be the application of additional constraints to the proposed solution, such as, for example, the maximum delay that a flow can support.

Appendix

We check the maintenance of continuous differentiability at the point $f_j = \mu_j^{\max}/\theta_j$. By noting that the derivative of the cost function of player $i$ with respect to $f_j^i$ has the form

$$\frac{\partial J^i}{\partial f_j^i} = c_j^*(f_j) + f_j^i \, c_j^{*\prime}(f_j), \quad (A.1)$$

it is immediate to note that we need to compare

$$\frac{\partial}{\partial f_j^i} \left[ K_j \frac{(f_j^i + f_j^{-i})^{1/\alpha_j} (\mu_j^{\max})^{3 - (1/\alpha_j)}}{\mu_j^{\max} - f_j^i - f_j^{-i}} \right]_{f_j^i = \mu_j^{\max}/\theta_j - f_j^{-i}} \quad \text{with} \quad \frac{\partial}{\partial f_j^i} \left[ h_j \cdot (f_j)^2 \right]_{f_j^i = \mu_j^{\max}/\theta_j - f_j^{-i}}. \quad (A.2)$$

We have

$$K_j \frac{(1/\alpha_j)(f_j)^{1/\alpha_j - 1} (\mu_j^{\max})^{3 - 1/\alpha_j} (\mu_j^{\max} - f_j) + (f_j)^{1/\alpha_j} (\mu_j^{\max})^{3 - 1/\alpha_j}}{(\mu_j^{\max} - f_j)^2} \Bigg|_{f_j = \mu_j^{\max}/\theta_j} = K_j \mu_j^{\max} \frac{(\theta_j)^{-1/\alpha_j} (\theta_j/\alpha_j - 1/\alpha_j + 1)}{((\theta_j - 1)/\theta_j)^2},$$
$$2 h_j f_j \Big|_{f_j = \mu_j^{\max}/\theta_j} = 2 K_j \frac{(\theta_j)^{3 - 1/\alpha_j}}{\theta_j - 1} \cdot \frac{\mu_j^{\max}}{\theta_j}. \quad (A.3)$$

Then, let us check when the two expressions coincide:

$$K_j \mu_j^{\max} \frac{(\theta_j)^{-1/\alpha_j} (\theta_j/\alpha_j - 1/\alpha_j + 1)}{((\theta_j - 1)/\theta_j)^2} = 2 K_j \frac{(\theta_j)^{3 - 1/\alpha_j}}{\theta_j - 1} \cdot \frac{\mu_j^{\max}}{\theta_j},$$
$$\frac{(\theta_j/\alpha_j - 1/\alpha_j + 1)(\theta_j)^2}{\theta_j - 1} = 2 (\theta_j)^2, \qquad \frac{\theta_j}{\alpha_j} - \frac{1}{\alpha_j} + 1 = 2\theta_j - 2.$$

Substituting $\theta_j = (3\alpha_j - 1)/(2\alpha_j - 1)$,

$$\left( \frac{1}{\alpha_j} - 2 \right) \frac{3\alpha_j - 1}{2\alpha_j - 1} - \frac{1}{\alpha_j} + 3 = 0, \qquad \left( \frac{1 - 2\alpha_j}{\alpha_j} \right) \frac{3\alpha_j - 1}{2\alpha_j - 1} - \frac{1}{\alpha_j} + 3 = 0,$$
$$\frac{1 - 3\alpha_j}{\alpha_j} - \frac{1}{\alpha_j} + 3 = 0, \qquad 1 - 3\alpha_j - 1 + 3\alpha_j = 0, \quad (A.4)$$

which holds identically for any $\alpha_j$; hence the two one-sided derivatives always coincide, and continuous differentiability is maintained.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was partially supported by the H2020 European Projects ARCADIA (http://www.arcadia-framework.eu/) and INPUT (http://www.input-project.eu/), funded by the European Commission.

References

[1] C. Bianco, F. Cucchietti, and G. Griffa, "Energy consumption trends in the next generation access network—a telco perspective," in Proceedings of the 29th International Telecommunication Energy Conference (INTELEC '07), pp. 737–742, Rome, Italy, September-October 2007.

[2] GeSI, SMARTer 2020: The Role of ICT in Driving a Sustainable Future, December 2012, http://gesi.org/portfolio/report/72.

[3] A. Manzalini, "Future edge ICT networks," IEEE COMSOC MMTC E-Letter, vol. 7, no. 7, pp. 1–4, 2012.

[4] B. A. A. Nunes, M. Mendonca, X.-N. Nguyen, K. Obraczka, and T. Turletti, "A survey of software-defined networking: past, present, and future of programmable networks," IEEE Communications Surveys & Tutorials, vol. 16, no. 3, pp. 1617–1634, 2014.

[5] "Network functions virtualisation—introductory white paper," in Proceedings of the SDN and OpenFlow World Congress, Darmstadt, Germany, October 2012.

[6] R. Bolla, R. Bruschi, F. Davoli, and C. Lombardo, "Fine-grained energy-efficient consolidation in SDN networks and devices," IEEE Transactions on Network and Service Management, vol. 12, no. 2, pp. 132–145, 2015.

[7] R. Bolla, R. Bruschi, F. Davoli, and F. Cucchietti, "Energy efficiency in the future internet: a survey of existing approaches and trends in energy-aware fixed network infrastructures," IEEE Communications Surveys & Tutorials, vol. 13, no. 2, pp. 223–244, 2011.

[8] I. Menache and A. Ozdaglar, Network Games: Theory, Models and Dynamics, Morgan & Claypool Publishers, San Rafael, Calif, USA, 2011.

[9] R. Bolla, R. Bruschi, F. Davoli, and P. Lago, "Optimizing the power-delay product in energy-aware packet forwarding engines," in Proceedings of the 24th Tyrrhenian International Workshop on Digital Communications—Green ICT (TIWDC '13), pp. 1–5, IEEE, Genoa, Italy, September 2013.

[10] A. Khan, A. Zugenmaier, D. Jurca, and W. Kellerer, "Network virtualization: a hypervisor for the internet?" IEEE Communications Magazine, vol. 50, no. 1, pp. 136–143, 2012.

[11] Q. Duan, Y. Yan, and A. V. Vasilakos, "A survey on service-oriented network virtualization toward convergence of networking and cloud computing," IEEE Transactions on Network and Service Management, vol. 9, no. 4, pp. 373–392, 2012.

[12] G. M. Roy, S. K. Saurabh, N. M. Upadhyay, and P. K. Gupta, "Creation of virtual node, virtual link and managing them in network virtualization," in Proceedings of the World Congress on Information and Communication Technologies (WICT '11), pp. 738–742, IEEE, Mumbai, India, December 2011.

[13] A. Manzalini, R. Saracco, C. Buyukkoc et al., "Software-defined networks or future networks and services—main technical challenges and business implications," in Proceedings of the IEEE Workshop SDN4FNS, Trento, Italy, November 2013.

[14] M. Yu, Y. Yi, J. Rexford, and M. Chiang, "Rethinking virtual network embedding: substrate support for path splitting and migration," ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, pp. 17–19, 2008.

[15] A. Jarray and A. Karmouch, "Decomposition approaches for virtual network embedding with one-shot node and link mapping," IEEE/ACM Transactions on Networking, vol. 23, no. 3, pp. 1012–1025, 2015.

[16] N. M. M. K. Chowdhury, M. R. Rahman, and R. Boutaba, "Virtual network embedding with coordinated node and link mapping," in Proceedings of the 28th Conference on Computer Communications (IEEE INFOCOM '09), pp. 783–791, Rio de Janeiro, Brazil, April 2009.

[17] X. Cheng, S. Su, Z. Zhang et al., "Virtual network embedding through topology-aware node ranking," ACM SIGCOMM Computer Communication Review, vol. 41, no. 2, pp. 38–47, 2011.

[18] J. F. Botero, X. Hesselbach, M. Duelli, D. Schlosser, A. Fischer, and H. De Meer, "Energy efficient virtual network embedding," IEEE Communications Letters, vol. 16, no. 5, pp. 756–759, 2012.

[19] A. Leivadeas, C. Papagianni, and S. Papavassiliou, "Efficient resource mapping framework over networked clouds via iterated local search-based request partitioning," IEEE Transactions on Parallel and Distributed Systems, vol. 24, no. 6, pp. 1077–1086, 2013.

[20] A. Fischer, J. F. Botero, M. T. Beck, H. de Meer, and X. Hesselbach, "Virtual network embedding: a survey," IEEE Communications Surveys & Tutorials, vol. 15, no. 4, pp. 1888–1906, 2013.

[21] M. Scholler, M. Stiemerling, A. Ripke, and R. Bless, "Resilient deployment of virtual network functions," in Proceedings of the 5th International Congress on Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT '13), pp. 208–214, Almaty, Kazakhstan, September 2013.

[22] A. Jarray and A. Karmouch, "VCG auction-based approach for efficient Virtual Network embedding," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 609–615, Ghent, Belgium, May 2013.

[23] A. Gember-Jacobson, R. Viswanathan, C. Prakash et al., "OpenNF: enabling innovation in network function control," in Proceedings of the ACM Conference on Special Interest Group on Data Communication (SIGCOMM '14), pp. 163–174, ACM, Chicago, Ill, USA, August 2014.

[24] J. Antoniou and A. Pitsillides, Game Theory in Communication Networks: Cooperative Resolution of Interactive Networking Scenarios, CRC Press, Boca Raton, Fla, USA, 2013.

[25] M. S. Seddiki, M. Frikha, and Y.-Q. Song, "A non-cooperative game-theoretic framework for resource allocation in network virtualization," Telecommunication Systems, vol. 61, no. 2, pp. 209–219, 2016.

[26] L. Pavel, Game Theory for Control of Optical Networks, Springer, New York, NY, USA, 2012.

[27] E. Altman, T. Boulogne, R. El-Azouzi, T. Jimenez, and L. Wynter, "A survey on networking games in telecommunications," Computers & Operations Research, vol. 33, no. 2, pp. 286–311, 2006.

[28] S. Nedevschi, L. Popa, G. Iannaccone, S. Ratnasamy, and D. Wetherall, "Reducing network energy consumption via sleeping and rate-adaptation," in Proceedings of the 5th USENIX Symposium on Networked Systems Design and Implementation (NSDI '08), pp. 323–336, San Francisco, Calif, USA, April 2008.

[29] S. Henzler, Power Management of Digital Circuits in Deep Sub-Micron CMOS Technologies, vol. 25 of Springer Series in Advanced Microelectronics, Springer, Dordrecht, The Netherlands, 2007.

[30] K. Li, "Scheduling parallel tasks on multiprocessor computers with efficient power management," in Proceedings of the IEEE International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum (IPDPSW '10), pp. 1–8, IEEE, Atlanta, Ga, USA, April 2010.

[31] T. Basar, Lecture Notes on Non-Cooperative Game Theory, Hamilton Institute, Kildare, Ireland, 2010, http://www.hamilton.ie/ollie/Downloads/Game.pdf.

[32] A. Orda, R. Rom, and N. Shimkin, "Competitive routing in multiuser communication networks," IEEE/ACM Transactions on Networking, vol. 1, no. 5, pp. 510–521, 1993.

[33] E. Altman, T. Basar, T. Jimenez, and N. Shimkin, "Competitive routing in networks with polynomial costs," IEEE Transactions on Automatic Control, vol. 47, no. 1, pp. 92–96, 2002.

[34] J. B. Rosen, "Existence and uniqueness of equilibrium points for concave N-person games," Econometrica, vol. 33, no. 3, pp. 520–534, 1965.

[35] Y. A. Korilis, A. A. Lazar, and A. Orda, "Architecting noncooperative networks," IEEE Journal on Selected Areas in Communications, vol. 13, no. 7, pp. 1241–1251, 1995.

[36] MATLAB: The Language of Technical Computing, http://www.mathworks.com/products/matlab/.

[37] H. Yaïche, R. R. Mazumdar, and C. Rosenberg, "A game theoretic framework for bandwidth allocation and pricing in broadband networks," IEEE/ACM Transactions on Networking, vol. 8, no. 5, pp. 667–678, 2000.

Research Article
A Processor-Sharing Scheduling Strategy for NFV Nodes

Giuseppe Faraci, Alfio Lombardo, and Giovanni Schembra

Dipartimento di Ingegneria Elettrica, Elettronica e Informatica (DIEEI), University of Catania, 95123 Catania, Italy

Correspondence should be addressed to Giovanni Schembra; schembra@dieei.unict.it

Received 2 November 2015; Accepted 12 January 2016

Academic Editor: Xavier Hesselbach

Copyright © 2016 Giuseppe Faraci et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The introduction of the two paradigms SDN and NFV to "softwarize" the current Internet is making management and resource allocation two key challenges in the evolution towards the Future Internet. In this context, this paper proposes Network-Aware Round Robin (NARR), a processor-sharing strategy to reduce delays in traversing SDN/NFV nodes. The application of NARR alleviates the job of the Orchestrator by automatically working at the intranode level, dynamically assigning the processor slices to the virtual network functions (VNFs) according to the state of the queues associated with the output links of the network interface cards (NICs). An extensive simulation set is presented to show the improvements achieved with respect to two more processor-sharing strategies chosen as reference.

1. Introduction

In the last few years, the diffusion of new complex and efficient distributed services in the Internet is becoming increasingly difficult because of the ossification of the Internet protocols, the proprietary nature of existing hardware appliances, the costs, and the lack of skilled professionals for maintaining and upgrading them.

In order to alleviate these problems, two new network paradigms, Software Defined Networks (SDN) [1–5] and Network Functions Virtualization (NFV) [6, 7], have been recently proposed, with the specific target of improving the flexibility of network service provisioning and reducing the time to market of new services.

SDN is an emerging architecture that aims at making the network dynamic, manageable, and cost-effective, by decoupling the system that makes decisions about where traffic is sent (the control plane) from the underlying system that forwards traffic to the selected destination (the data plane). In this way, the network control becomes directly programmable, and the underlying infrastructure is abstracted for applications and network services.

NFV is a core structural change in the way telecommunication infrastructure is deployed. The NFV initiative started in late 2012 by some of the biggest telecommunications service providers, which formed an Industry Specification Group (ISG) within the European Telecommunications Standards Institute (ETSI). The interest has grown, involving today more than 28 network operators and over 150 technology providers from across the telecommunications industry [7]. The NFV paradigm leverages virtualization technologies and commercial off-the-shelf programmable hardware, such as general-purpose servers, storage, and switches, with the final target of decoupling the software implementation of network functions from the underlying hardware.

The coexistence and the interaction of both NFV and SDN paradigms is giving to the network operators the possibility of achieving greater agility and acceleration in new service deployments, with a consequent considerable reduction of both Capital Expenditure (CAPEX) and Operational Expenditure (OPEX) [8].

One of the main challenging problems in deploying an SDN/NFV network is an efficient design of resource allocation and management functions that are in charge of the network Orchestrator. Although this task is well covered in data center and cloud scenarios [9, 10], it is currently a challenging problem in a geographic network, where transmission delays cannot be neglected, and transmission capacities of the interconnection links are not comparable with the case of the above scenarios. This is the reason why the problem of orchestrating an SDN/NFV network is still open and attracts a lot of research interest from both academia and industry.



More specifically, the whole problem of orchestrating an SDN/NFV network is very complex because it involves a design work at both internode and intranode levels [11, 12]. At the internode level, and in all the cases where the time between each execution is significantly greater than the time to collect, compute, and disseminate results, the application of a centralized approach is practicable [13, 14]. In these cases, in fact, taking into consideration both traffic characterization and required level of quality of service (QoS), the Orchestrator is able to decide in a centralized manner how many instances of the same function have to be run simultaneously, the network nodes that have to execute them, and the routing strategies that allow traffic flows to cross the nodes where the requested network functions are running.

Instead, centralizing a strategy dealing with operations that require dynamic reconfiguration of network resources is actually unfeasible to be executed by the Orchestrator, for problems of resilience and scalability. Therefore, adaptive management operations with short timescales require a distributed approach. The authors of [15] propose a framework to support adaptive resource management operations, which involve short timescale reconfiguration of network resources, showing how requirements in terms of load-balancing and energy management [16–18] can be satisfied. Another work in the same direction is [19], which proposes a solution for the consolidation of VMs on local computing resources, exclusively based on local information. The work in [20] aims at alleviating the inter-VM network latency, defining a hypervisor scheduler algorithm that is able to take into consideration the allocation of the resources within a consolidated environment, scheduling VMs to reduce their waiting latency in the run queue. Another approach is applied in [21], which introduces a policy to manage the internal on/off switching of virtual network functions (VNFs) in NFV-compliant Customer Premises Equipment (CPE) devices.

The focus of this paper is on another fundamental problem that is inherent to resource allocation within an SDN/NFV node, that is, the decision of the percentage of CPU to be assigned to each VNF. If at a first glance this could seem a classical problem of processor sharing, widely explored in the past literature [15, 22, 23], actually it is much more complex, because performance can be strongly improved by leveraging the correlation with the output queues associated with the network interface cards (NICs).

With all this in mind, the main contribution of this paper is the definition of a processor-sharing policy, in the following referred to as Network-Aware Round Robin (NARR), which is specific for SDN/NFV nodes. Starting from the consideration that packets that have received the service of a network function from a virtual machine (VM) running on a given node are enqueued to wait for transmission through a given NIC, the proposed strategy dynamically changes the slices of the CPU assigned to each VNF according to the state of the output NIC queues. More specifically, the NARR strategy gives a larger CPU slice to serve packets that will leave the node through the NIC that is currently less loaded, in such a way as to minimize wastes of the NIC output link capacities, also minimizing the overall delay experienced by packets traversing nodes that implement NARR.
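To give a rough feel for the stated intuition (the precise slice-assignment rule is defined in Section 3), the toy Python sketch below (all names are ours, not from the paper) weights each VNF inversely to the occupancy of the NIC queue toward which its packets are headed, and normalizes the weights into CPU shares:

```python
def narr_like_cpu_shares(queue_len, vnf_to_nic, eps=1e-6):
    """Illustrative only: assign CPU shares so that VNFs whose packets
    leave through less-loaded NICs get larger processor slices.

    queue_len  : dict NIC -> current output-queue length (packets)
    vnf_to_nic : dict VNF -> NIC through which its packets will leave
    """
    weights = {v: 1.0 / (queue_len[nic] + eps)
               for v, nic in vnf_to_nic.items()}
    total = sum(weights.values())
    return {v: w / total for v, w in weights.items()}

# Example: "f2" feeds the emptier NIC "n2", so it gets the larger slice.
print(narr_like_cpu_shares({"n1": 40, "n2": 5}, {"f1": "n1", "f2": "n2"}))
```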

As a side contribution the paper calculates an on-offmodel for the traffic leaving the SDNNFV node on each NICoutput linkThismodel can be used as a building block for thedesign and performance evaluation of an entire network

The paper is structured as follows. Section 2 describes the node architecture. Section 3 introduces the NARR strategy. Section 4 presents a case study and shows some numerical results. Finally, Section 5 draws some conclusions and discusses some future work.

2. System Description

The target of this section is the description of the system we consider in the rest of the paper. It is an SDN/NFV node, as the one considered in [11, 12], where we will apply the NARR strategy. Its architecture, shown in Figure 1(a), is compliant with the ETSI Specifications [24]. It is composed of three different domains, namely, the Compute domain, the Hypervisor domain, and the Infrastructure Network domain. The Compute domain provides the computational and storage hardware resources that allow the node to host the VNFs. Thanks to the computing and storage virtualization provided by the Hypervisor domain, a VM can be created, migrated from one node to another one, and halted, in order to optimize the deployment according to specific performance parameters. Communications among the VMs, and between the VMs and the external environment, are provided by the Infrastructure Network domain.

The SDN/NFV node is remotely controlled by the Orchestrator, whose architecture is shown in Figure 1(b). It is constituted by three main blocks. The Orchestration Engine executes all the algorithms and the strategies to manage and orchestrate the whole network. After each decision, the Orchestration Engine requests that the NFV Coordinator instantiates, migrates, or halts VMs, and consequently requests that the SDN Controller modifies the flow tables of the SDN switches in the network, in such a way that traffic flows can traverse the VMs hosting the requested VNFs.

With this in mind, a functional architecture of the NFV node is represented in Figure 2. Its main components are the Processor, which manages the Compute domain and hosts the Hypervisor domain, and the Network Interface Cards (NICs), with their queues, which constitute the "Network Hardware" block in Figure 1.

Let $M$ be the number of virtual network functions (VNFs) that are running in the node, and let $L$ be the number of output NICs. In order to simplify notation, in the following we will assume that all the NICs have the same characteristics in terms of buffer capacity and output rate. So, let $K^{(\mathrm{NIC})}$ be the size of the queue associated with each NIC, that is, the maximum number of packets that each queue can contain, and let $\mu^{(\mathrm{NIC})}$ be the transmission rate of the output link associated with each NIC, expressed in bits/s.

The Flow Distributor block has the task of routing each entering flow towards the function required by the flow. It is a software SDN switch that can be implemented, for example, with Open vSwitch [25]. It routes the flows to the VMs running the requested VNFs according to the control messages received by the SDN Controller residing in the Orchestrator.

[Figure 1: Network function virtualization infrastructure. (a) NFV node: the Compute domain (computing and storage hardware), the Hypervisor domain (virtualization layer with virtual computing, virtual storage, and virtual network), and the Infrastructure Network domain (NICs) host the VNFs. (b) Orchestrator: Orchestration Engine, NFV Coordinator, and SDN Controller, managing the NFVI nodes.]

[Figure 2: NFV node functional architecture. A Flow Distributor dispatches incoming traffic to the function blocks $F_1, \ldots, F_M$, served by the Processor with rates $\mu^{(F)}_{[1]}, \ldots, \mu^{(F)}_{[M]}$ under the control of the Processor Arbiter; the output NIC queues $N_1, \ldots, N_L$ are each served at rate $\mu^{(\mathrm{NIC})}$.]

The most common protocol that can be used for the communications between the SDN Controller and the Orchestrator is OpenFlow [26].

Let $\mu^{(P)}$ be the total processing rate of the processor, expressed in packets/s. This rate is shared among all the active functions according to a processor rate scheduling strategy. Let $\mu^{(F)}$ be the array whose generic element $\mu^{(F)}_{[m]}$, with $m \in \{1, \ldots, M\}$, is the portion of the processor rate assigned to the VM implementing the function $F_m$. Of course, we have

$$\sum_{m=1}^{M} \mu^{(F)}_{[m]} = \mu^{(P)} \tag{1}$$

Once a packet has been served by the required function, it is sent to one of the NICs to exit from the node. If the NIC is transmitting another packet, the arriving packets are enqueued in the NIC queue. We will indicate the queue associated with the generic NIC $l$ as $Q^{(\mathrm{NIC})}_{l}$.

In order to implement the NARR strategy proposed in this paper, we realize the block relative to each function with a set of $L$ parallel queues, in the following referred to as inside-function queues, as shown in Figure 3 for the generic function $F_m$. The generic $l$th inside-function queue of the function $F_m$, indicated as $Q_{m,l}$ in Figure 3, is used to enqueue packets that, after receiving the service of the function $F_m$, will leave the node through the NIC $l$. Let $K^{(F)}_{\mathrm{Ins}}$ be the size of each inside-function queue. Each inside-function queue of the generic function $F_m$ receives a portion of the processor rate assigned to that function. Let $\mu^{(F_{\mathrm{Ins}})}_{[m,l]}$ be the portion of the processor rate assigned to the queue $Q_{m,l}$ of the function $F_m$. Of course, we have

$$\sum_{l=1}^{L} \mu^{(F_{\mathrm{Ins}})}_{[m,l]} = \mu^{(F)}_{[m]} \tag{2}$$


[Figure 3: Block diagram of the generic function $F_m$: the inside-function queues $Q_{m,1}, \ldots, Q_{m,L}$ are served with rates $\mu^{(F_{\mathrm{Ins}})}_{[m,1]}, \ldots, \mu^{(F_{\mathrm{Ins}})}_{[m,L]}$.]

The portion of processor rate associated with each inside-function queue is dynamically changed by the Processor Arbiter according to the NARR strategy described in Section 3.

3. NARR Processor-Sharing Strategy

The NARR (Network-Aware Round Robin) processor-sharing strategy observes the state of both the inside-function queues and the NIC queues, with the goal of reducing as far as possible the inactivity periods of the NIC output links. As already introduced in the Introduction, its definition starts from the fact that packets that have received the service of a VNF are enqueued to wait for transmission through a given NIC. So, in order to avoid output link capacity waste, NARR dynamically changes the slices of the CPU assigned to each VNF, and in particular to each inside-function queue, according to the state of the output NIC queues, assigning larger CPU slices to serve packets that will leave the node through less-loaded NICs.

More specifically, the Processor Arbiter decides the processor rate portions according to two different steps.

Step 1 (assignment of the processor rate portion to the aggregation of queues whose output is a specific NIC). This step meets the target of the proposed strategy, that is, to reduce as much as possible underutilization of the NIC output links and, as a consequence, delays in the relative queues. To this purpose, let us consider a virtual queue that contains all the packets that are stored in all the inside-function queues $Q_{m,l}$, for each $m \in [1, M]$, that is, all the packets that will leave the node through the NIC $l$. Let us indicate this virtual queue as $Q^{(\rightarrow\mathrm{NIC}_{l})}_{\mathrm{Aggr}}$ and its service rate as $\mu^{(\rightarrow\mathrm{NIC}_{l})}_{\mathrm{Aggr}}$. Of course, we have

$$\mu^{(\rightarrow\mathrm{NIC}_{l})}_{\mathrm{Aggr}} = \sum_{m=1}^{M} \mu^{(F_{\mathrm{Ins}})}_{[m,l]} \tag{3}$$

With this in mind, the idea is to give a higher processor slice to the inside-function queues whose flows are directed to the NICs that are emptying.

Taking into account the goal of privileging the flows that will leave the node through underloaded NICs, the Processor Arbiter calculates $\mu^{(\rightarrow\mathrm{NIC}_{l})}_{\mathrm{Aggr}}$ as follows:

$$\mu^{(\rightarrow\mathrm{NIC}_{l})}_{\mathrm{Aggr}} = \frac{q_{\mathrm{ref}} - S^{(\mathrm{NIC})}_{l}}{\sum_{j=1}^{L}\left(q_{\mathrm{ref}} - S^{(\mathrm{NIC})}_{j}\right)}\,\mu^{(P)} \tag{4}$$

where $S^{(\mathrm{NIC})}_{l}$ represents the state of the queue associated with the NIC $l$, while $q_{\mathrm{ref}}$ is defined as follows:

$$q_{\mathrm{ref}} = \min\left\{\alpha \cdot \max_{j}\left(S^{(\mathrm{NIC})}_{j}\right),\; K^{(\mathrm{NIC})}\right\} \tag{5}$$

The term $q_{\mathrm{ref}}$ is a reference target value calculated from the state of the NIC queue that has the highest length, amplified with a coefficient $\alpha$ and truncated to the maximum queue size $K^{(\mathrm{NIC})}$. It is determined in such a way that, if we consider $\alpha = 1$, the NIC queue that has the highest length does not receive packets from the inside-function queues, because their service rate is set to zero; the other queues receive packets with a rate that is proportional to the distance between their length and the length of the most overloaded NIC queue. However, through an extensive set of simulations, we deduced that setting $\alpha = 1$ causes bad performance, because there is always a group of inside-function queues that are not served. Instead, all the $\alpha$ values in the interval $]1, 2]$ give almost equivalent performance. For this reason, in the numerical analysis presented in Section 4 we have set $\alpha = 1.2$.
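To make Step 1 concrete, the following Python fragment sketches how a Processor Arbiter could compute the aggregate rate portions of (4) and (5); it is a minimal sketch, and all function and variable names are our own illustrative choices, not code from the paper's simulator.

    # Step 1 of NARR: split the total processor rate mu_p among the L
    # virtual queues (one per output NIC) according to (4) and (5).

    def aggregate_rates(nic_states, mu_p, k_nic, alpha=1.2):
        """nic_states[l] is the current length S^(NIC)_l of NIC queue l;
        returns the list of service rates mu_Aggr[l] of the virtual queues."""
        # Reference length (5): the longest NIC queue, amplified by alpha
        # and truncated to the maximum queue size K^(NIC).
        q_ref = min(alpha * max(nic_states), k_nic)
        # Each virtual queue gets a share proportional to the distance
        # between its NIC queue length and the reference value (4).
        gaps = [q_ref - s for s in nic_states]
        total = sum(gaps)
        if total == 0:  # e.g., all NIC queues empty: share the rate equally
            return [mu_p / len(nic_states)] * len(nic_states)
        return [mu_p * g / total for g in gaps]

For instance, with nic_states = [40, 0, 25], the empty NIC queue receives the largest share, which is exactly the behavior observed in the temporal analysis of Section 4.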

Step 2 (assignment of the processor rate portion to each inside-function queue). Let us consider the generic $l$th inside-function queue of the function $F_m$, that is, the queue $Q_{m,l}$. Its service rate is calculated as being proportional to the current state of this queue in comparison with the other $l$th queues of the other functions. To this purpose, let us indicate the state of the virtual queue $Q^{(\rightarrow\mathrm{NIC}_{l})}_{\mathrm{Aggr}}$ as $S^{(\rightarrow\mathrm{NIC}_{l})}_{\mathrm{Aggr}}$. Of course, it can be calculated as the sum of the states of all the inside-function queues $Q_{m,l}$, for each $m \in [1, M]$, that is,

$$S^{(\rightarrow\mathrm{NIC}_{l})}_{\mathrm{Aggr}} = \sum_{k=1}^{M} S^{(F_{\mathrm{Ins}})}_{k,l} \tag{6}$$

So the service rate of the inside-function queue $Q_{m,l}$ is determined as a fraction of the service rate assigned at the first step to the virtual queue $Q^{(\rightarrow\mathrm{NIC}_{l})}_{\mathrm{Aggr}}$, $\mu^{(\rightarrow\mathrm{NIC}_{l})}_{\mathrm{Aggr}}$, as follows:

$$\mu^{(F_{\mathrm{Ins}})}_{[m,l]} = \frac{S^{(F_{\mathrm{Ins}})}_{m,l}}{S^{(\rightarrow\mathrm{NIC}_{l})}_{\mathrm{Aggr}}}\,\mu^{(\rightarrow\mathrm{NIC}_{l})}_{\mathrm{Aggr}} \tag{7}$$

Of course, if at any time an inside-function queue remains empty, the processor rate portion assigned to it will be shared among the other queues, proportionally to the processor portions previously assigned. Likewise, if at some instant an empty queue receives a new packet, the previous processor rate portion is reassigned to that queue.
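Along the same lines, Step 2 can be sketched as a proportional split of each aggregate rate among the $M$ inside-function queues, following (6) and (7); again, the code is only an illustrative sketch with names of our own choosing.

    # Step 2 of NARR: split mu_aggr[l] among the inside-function queues
    # Q_{m,l} in proportion to their current lengths, as in (6) and (7).

    def inside_function_rates(inside_states, mu_aggr):
        """inside_states[m][l] is the length S^(F_Ins)_{m,l} of queue Q_{m,l};
        mu_aggr[l] is the rate computed by Step 1 for NIC l.
        Returns mu[m][l], the processor rate portion of each queue."""
        M, L = len(inside_states), len(mu_aggr)
        mu = [[0.0] * L for _ in range(M)]
        for l in range(L):
            # State of the virtual queue (6): sum over all functions.
            s_aggr = sum(inside_states[m][l] for m in range(M))
            if s_aggr == 0:
                continue  # no backlog towards NIC l
            for m in range(M):
                # Proportional share (7); an empty queue gets a zero share,
                # so its portion is implicitly redistributed to the others.
                mu[m][l] = mu_aggr[l] * inside_states[m][l] / s_aggr
        return mu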


4. Case Study

In this section we present a numerical analysis of the behavior of an SDN/NFV node that applies the proposed NARR processor-sharing strategy, with the target of evaluating the achieved performance. To this purpose, we will consider two other processor-sharing strategies as reference, in the following referred to as round robin (RR) and queue-length weighted round robin (QLWRR). In both the reference cases the node has the same $L$ NIC queues, but it has only $M$ processor queues, one for each function. Each of the $M$ queues has a size of $K^{(F)} = L \cdot K^{(F)}_{\mathrm{Ins}}$, where $K^{(F)}_{\mathrm{Ins}}$ represents the size of each internal function queue already defined so far for the proposed strategy.

The RR strategy applies the classical round robin scheduling policy to serve the $M$ function queues, that is, it serves each function queue with a rate $\mu^{(F)}_{\mathrm{RR}} = \mu^{(P)}/M$.

The QLWRR strategy, on the other hand, serves each function queue with a rate that is proportional to the queue length, that is,

$$\mu^{(F_m)}_{\mathrm{QLWRR}} = \frac{S^{(F_m)}}{\sum_{k=1}^{M} S^{(F_k)}}\,\mu^{(P)} \tag{8}$$

where $S^{(F_m)}$ is the state of the queue associated with the function $F_m$.
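The two reference policies admit an equally compact sketch; as above, the code is illustrative only, with mu_p denoting $\mu^{(P)}$ and func_states the per-function queue lengths $S^{(F_m)}$.

    # Reference processor-sharing policies used in the case study.

    def rr_rates(mu_p, M):
        """Round robin: each of the M function queues gets mu^(P)/M."""
        return [mu_p / M] * M

    def qlwrr_rates(mu_p, func_states):
        """Queue-length weighted round robin (8): rates proportional
        to the current function queue lengths."""
        total = sum(func_states)
        if total == 0:
            return [mu_p / len(func_states)] * len(func_states)
        return [mu_p * s / total for s in func_states]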

4.1. Parameter Settings. In this numerical analysis we consider a node with $M = 4$ VNFs and $L = 3$ output NICs. We loaded the node with a balanced traffic constituted by $N_F = 12$ flows, each characterized by a different 2-tuple (requested VNF, output NIC). Each flow has been generated with an on-off model characterized by exponentially distributed on and off periods. When a flow is in the off state, no packets arrive to the node from it; instead, when it is in the on state, packets arrive with an average rate $\lambda_{\mathrm{ON}}$. Each packet is assumed to have an exponentially distributed size with a mean of 1 kbyte. In all the simulations we have considered the same on-off cycle duration $\delta = T_{\mathrm{OFF}} + T_{\mathrm{ON}} = 5$ msec, while we have varied the burstiness. Burstiness of on-off sources is defined as

$$b = \frac{\lambda_{\mathrm{ON}}}{\lambda_{\mathrm{Mean}}} \tag{9}$$

where $\lambda_{\mathrm{Mean}}$ is the mean emission rate. Now, indicating the probability of the ON state as $\pi_{\mathrm{ON}}$, and taking into account that $\lambda_{\mathrm{Mean}} = \lambda_{\mathrm{ON}} \cdot \pi_{\mathrm{ON}}$ and $\pi_{\mathrm{ON}} = T_{\mathrm{ON}}/(T_{\mathrm{OFF}} + T_{\mathrm{ON}})$, we have

$$b = \frac{T_{\mathrm{OFF}} + T_{\mathrm{ON}}}{T_{\mathrm{ON}}} \tag{10}$$

In our analysis the burstiness has been varied in the range $[2, 22]$. Consequently, we have derived the mean durations of the off and on periods as follows:

$$T_{\mathrm{ON}} = \frac{\delta}{b}, \qquad T_{\mathrm{OFF}} = \delta - T_{\mathrm{ON}} \tag{11}$$

Finally, in order to maintain the same mean packet rate for different values of $T_{\mathrm{OFF}}$ and $T_{\mathrm{ON}}$, we have assumed that 75 packets are transmitted on average during each on period, that is, $\lambda_{\mathrm{ON}} = 75/T_{\mathrm{ON}}$. The resulting mean emission rate is $\lambda_{\mathrm{Mean}} = 122.9$ Mbit/s.

As far as the NICs are concerned, we have considered an output rate of $\mu^{(\mathrm{NIC})} = 980$ Mbit/s, so having a utilization coefficient on each NIC of $\rho^{(\mathrm{NIC})} = 0.5$, and a queue size $K^{(\mathrm{NIC})} = 3000$ packets. Instead, regarding the processor, we considered a size of each inside-function queue of $K^{(F)}_{\mathrm{Ins}} = 3000$ packets. Finally, we have analyzed two different processor cases. In the first case we considered a processor that is able to process $\mu^{(P)} = 306$ kpackets/s, while in the second case we assumed a processor rate of $\mu^{(P)} = 204$ kpackets/s. Therefore, defining the processor utilization coefficient as follows:

$$\rho^{(P)} = \frac{N_F \cdot \lambda_{\mathrm{Mean}}}{\mu^{(P)}} \tag{12}$$

with $N_F \cdot \lambda_{\mathrm{Mean}}$ being the total mean arrival rate to the node, the two considered cases are characterized by a processor utilization coefficient of $\rho^{(P)}_{\mathrm{Low}} = 0.6$ and $\rho^{(P)}_{\mathrm{High}} = 0.9$, respectively.
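As a quick sanity check of these settings, the following lines reproduce the derivations in (9)-(12); the conversion to bit rate assumes the 1-kbyte (8192-bit) mean packet size stated above, which is our own assumption for the arithmetic.

    # On-off source parameters and node load, following (9)-(12).
    delta = 5e-3                 # on-off cycle T_OFF + T_ON (s)
    b = 7                        # burstiness used in the temporal analysis
    T_on = delta / b             # (11): about 0.714 ms for b = 7
    T_off = delta - T_on
    lam_on = 75 / T_on           # 75 packets per on period (packets/s)
    lam_mean = lam_on * T_on / delta  # 15 kpackets/s, i.e. ~122.9 Mbit/s
    NF = 12                      # number of flows
    for mu_p in (306e3, 204e3):  # processor rates (packets/s)
        rho = NF * lam_mean / mu_p    # (12): ~0.59 and ~0.88, i.e. the
        print(f"mu_p = {mu_p:.0f} pkt/s -> rho = {rho:.2f}")  # 0.6/0.9 cases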

4.2. Numerical Results. In this section we present some results achieved by discrete-event simulations. The simulation tool used in the paper is publicly available in [27]. We first present a temporal analysis of the main variables characterizing the SDN/NFV node, and then we show a performance comparison of NARR with the two strategies RR and QLWRR, taken as a reference.

For the temporal analysis we focus on a short time interval of 720 μs, in order to be able to clearly highlight the evolution of the considered processes. We have loaded the node with an on-off traffic like the one described in Section 4.1, with a burstiness $b = 7$.

Figures 4, 5, and 6 show the time evolution of the length of the queues associated with the NICs, the processor slice assigned to each virtual queue loading each NIC, and the length of the same virtual queues. We can subdivide the considered time interval into three different periods.

In the first period, ranging between the instants 0.1063 and 0.1067, from Figure 4 we can notice that the NIC queue $Q^{(\mathrm{NIC})}_1$ has a greater length than the queue $Q^{(\mathrm{NIC})}_3$, while $Q^{(\mathrm{NIC})}_2$ is empty. For this reason, in this period, as shown in Figure 5, the processor is shared between $Q^{(\rightarrow\mathrm{NIC}_1)}_{\mathrm{Aggr}}$ and $Q^{(\rightarrow\mathrm{NIC}_3)}_{\mathrm{Aggr}}$, and the slice assigned to serve $Q^{(\rightarrow\mathrm{NIC}_3)}_{\mathrm{Aggr}}$ is higher, in such a way that the two queue lengths $Q^{(\mathrm{NIC})}_1$ and $Q^{(\mathrm{NIC})}_3$ reach the same value, a situation that happens at the end of the first period, around the instant 0.1067. During this period the behavior of both the virtual queues $Q^{(\rightarrow\mathrm{NIC}_1)}_{\mathrm{Aggr}}$ and $Q^{(\rightarrow\mathrm{NIC}_3)}_{\mathrm{Aggr}}$ in Figure 6 remains flat, showing the fact that the received processor rates are able to balance the arrival rates.

At the beginning of the second period, which ranges between the instants 0.1067 and 0.10693, the processor rate assigned to the virtual queue $Q^{(\rightarrow\mathrm{NIC}_3)}_{\mathrm{Aggr}}$ has become not sufficient to serve the amount of arriving traffic, and so the virtual queue $Q^{(\rightarrow\mathrm{NIC}_3)}_{\mathrm{Aggr}}$ increases its length, as shown in Figure 6.

[Figure 4: Length evolution of the NIC queues $S^{(\mathrm{NIC})}_1$, $S^{(\mathrm{NIC})}_2$, and $S^{(\mathrm{NIC})}_3$: queue length (packets) versus time (s).]

[Figure 5: Packet rate assigned to the virtual queues $\mu^{(\rightarrow\mathrm{NIC}_1)}_{\mathrm{Aggr}}$, $\mu^{(\rightarrow\mathrm{NIC}_2)}_{\mathrm{Aggr}}$, and $\mu^{(\rightarrow\mathrm{NIC}_3)}_{\mathrm{Aggr}}$: processor rate (kpackets/s) versus time (s).]

[Figure 6: Length evolution of the virtual queues $S^{(\rightarrow\mathrm{NIC}_1)}_{\mathrm{Aggr}}$, $S^{(\rightarrow\mathrm{NIC}_2)}_{\mathrm{Aggr}}$, and $S^{(\rightarrow\mathrm{NIC}_3)}_{\mathrm{Aggr}}$: queue length (packets) versus time (s).]

During this second period the processor slices assigned to the two queues $Q^{(\rightarrow\mathrm{NIC}_1)}_{\mathrm{Aggr}}$ and $Q^{(\rightarrow\mathrm{NIC}_3)}_{\mathrm{Aggr}}$ are adjusted in such a way that $Q^{(\mathrm{NIC})}_1$ and $Q^{(\mathrm{NIC})}_3$ remain with comparable lengths.

The last period starts at the instant 0.10693, characterized by the fact that the aggregated queue $Q^{(\rightarrow\mathrm{NIC}_2)}_{\mathrm{Aggr}}$ leaves the empty state and therefore participates in the processor-sharing process. Since the NIC queue $Q^{(\mathrm{NIC})}_2$ is low-loaded, as shown in Figure 4, the largest slice is assigned to $Q^{(\rightarrow\mathrm{NIC}_2)}_{\mathrm{Aggr}}$, in such a way that $Q^{(\mathrm{NIC})}_2$ can reach the same length of the other two NIC queues as soon as possible.

Now, in order to show how the second step of the proposed strategy works, we present the behavior of the inside-function queues whose output is sent to the NIC queue $Q^{(\mathrm{NIC})}_3$ during the same short time interval considered so far. More specifically, Figures 7 and 8 show the length of the considered inside-function queues and the processor slices assigned to them, respectively. The behavior of the queue $Q_{2,3}$ is not shown because it is empty in the considered period. As we can observe from the above figures, we can subdivide the interval into four periods:

(i) The first period, ranging in the interval [0.1063, 0.10657], is characterized by an empty state of the queue $Q_{3,3}$. Thus, in this period, the processor slice assigned to the aggregated queue $Q^{(\rightarrow\mathrm{NIC}_3)}_{\mathrm{Aggr}}$ is shared by $Q_{1,3}$ and $Q_{4,3}$ only.

(ii) During the second period, ranging in the interval [0.10657, 0.1067], $Q_{1,3}$ is scarcely loaded (in particular, it is empty in the second part of this period), and so the processor slice assigned to $Q_{3,3}$ is increased.

(iii) In the third period, ranging between 0.1067 and 0.10693, all the queues increase and equally share the processor.

(iv) Finally, in the last period, starting at the instant 0.10693, as already observed in Figure 5, the processor slice assigned to the aggregated queue $Q^{(\rightarrow\mathrm{NIC}_3)}_{\mathrm{Aggr}}$ is suddenly decreased, and consequently the slices assigned to the queues $Q_{1,3}$, $Q_{3,3}$, and $Q_{4,3}$ are decreased as well.

The steady-state analysis is presented in Figures 9, 10, and 11, which show the mean delay in the inside-function queues, the mean delay in the NIC queues, and the durations of the off- and on-states on the output links. The values reported in all the figures have been derived as the mean values of the results of many simulation experiments, using Student's t-distribution and with a 95% confidence interval. The number of experiments carried out to evaluate each numerical result has been automatically decided by the simulation tool, with the requirement of achieving a confidence interval less than 0.001 of the estimated mean value. To this purpose, the confidence interval is calculated at the end of each run, and simulation is stopped only when the confidence interval matches the maximum error requirement. The duration of each run has been chosen in such a way that the sample standard deviation is so low that less than 30 runs are enough to match the requirement.
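A possible reading of this stopping rule is sketched below, with Student's t quantiles taken from SciPy; the run_result callable, standing for one complete simulation run, is hypothetical.

    # Sketch of the adopted stopping rule: launch independent runs until
    # the 95% Student-t confidence interval is narrower than 0.1% of the
    # estimated mean.
    import statistics
    from scipy import stats

    def estimate(run_result, max_rel_err=1e-3, conf=0.95):
        samples = []
        while True:
            samples.append(run_result())          # one simulation run
            n = len(samples)
            if n < 2:
                continue
            mean = statistics.mean(samples)
            sem = statistics.stdev(samples) / n ** 0.5
            half = stats.t.ppf((1 + conf) / 2, n - 1) * sem  # CI half-width
            if mean != 0 and half / abs(mean) < max_rel_err:
                return mean, half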


[Figure 7: Length evolution of the inside-function queues $Q_{m,3}$ for each $m \in [1, M]$: (a) queue $Q_{1,3}$; (b) queue $Q_{3,3}$; (c) queue $Q_{4,3}$. Each panel plots queue length (packets) versus time (s).]

The figures compare the results achieved with the proposed strategy with the ones obtained with the two strategies RR and QLWRR. As said so far, results have been obtained against the burstiness and for two different values of the utilization coefficient, that is, $\rho^{(P)}_{\mathrm{Low}} = 0.6$ and $\rho^{(P)}_{\mathrm{High}} = 0.9$.

As expected, the mean lengths of the processor queues and the NIC queues increase with both the burstiness and the utilization coefficient.

Instead, we can note that the mean length of the processor queues is not affected by the applied policy. In fact, packets requiring the same function are enqueued in a unique queue and served with a rate $\mu^{(P)}/4$ when RR or QLWRR strategies are applied, while when the NARR strategy is used they are split into 12 different queues and served with a rate $\mu^{(P)}/12$. If in classical queueing theory [28] the second case is worse than the first one, because of the presence of service rate wastes during the more probable periods of empty queues, this is not the case here, because the processor capacity of an empty queue is dynamically reassigned to the other queues.

The advantage of the NARR strategy is evident in Figure 10, where the mean delay in the NIC queues is represented. In fact, we can observe that by only giving more processor rate to the most loaded processor queues (with the QLWRR strategy), performance improvements are negligible, while applying the NARR strategy we are able to obtain a delay reduction of about 12% in the case of a more performant processor ($\rho^{(P)}_{\mathrm{Low}} = 0.6$), reaching 50% when the processor works with $\rho^{(P)}_{\mathrm{High}} = 0.9$. The performance gain achieved with the NARR strategy increases with burstiness and with the processor load conditions, which are both likely: the first condition is due to the high burstiness of the Internet traffic; the second one is true as well because the processor should not be overdimensioned for economic purposes; otherwise, if overdimensioned, it can be controlled with a processor rate management policy like the one presented in [29] in order to save energy.

Finally, Figure 11 shows the mean durations of the on- and off-periods on the node output links.


[Figure 8: Processor rate assigned to the inside-function queues $Q_{m,3}$ for each $m \in [1, M]$: (a) rate $\mu^{(F_{\mathrm{Ins}})}_{[1,3]}$; (b) rate $\mu^{(F_{\mathrm{Ins}})}_{[3,3]}$; (c) rate $\mu^{(F_{\mathrm{Ins}})}_{[4,3]}$. Each panel plots processor rate (kpackets/s) versus time (s).]

Only one curve is shown for each case, because we have considered a utilization coefficient $\rho^{(\mathrm{NIC})} = 0.5$ on each NIC queue, and therefore in the considered case we have $T_{\mathrm{ON}} = T_{\mathrm{OFF}}$. As expected, the mean on-off durations are higher when the processor rate is higher (i.e., lower utilization coefficient). This is because in this case batches of packets cross the processor more quickly and reach the NIC queues back-to-back, so the output links remain busy (and then idle) for longer periods. These results can be used to model the output traffic of each SDN/NFV node as an Interrupted Poisson Process (IPP) [30]. This model can be iteratively used to represent the input traffic of other nodes, with the final target of realizing the model of a whole SDN/NFV network.
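For illustration, an IPP source with the measured mean on and off durations can be generated as follows; this is a generic sketch of the model in [30], not code taken from the paper's simulator.

    # Minimal Interrupted Poisson Process: exponentially distributed on and
    # off periods; packets arrive at rate lam_on only during on periods.
    import random

    def ipp_arrivals(t_on, t_off, lam_on, horizon):
        t, arrivals = 0.0, []
        while t < horizon:
            end_on = t + random.expovariate(1 / t_on)     # on period
            while True:
                t += random.expovariate(lam_on)           # next arrival
                if t > end_on:
                    break
                arrivals.append(t)
            t = end_on + random.expovariate(1 / t_off)    # off period
        return [a for a in arrivals if a <= horizon]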

5. Conclusions and Future Work

This paper addresses the problem of intranode resource allocation by introducing NARR, a processor-sharing strategy that leverages the consideration that, in any SDN/NFV node, packets that have received the service of a VNF are enqueued to wait for transmission through one of the output NICs. Therefore, the idea at the base of NARR is to dynamically change the slices of the CPU assigned to each VNF according to the state of the output NIC queues, giving more CPU to serve packets that will leave the node through the less-loaded NICs. In this way, wastes of the NIC output link capacities are minimized, and consequently the overall delay experienced by packets traversing the nodes that implement NARR is reduced.

SDN/NFV nodes that implement NARR can coexist in the same network with nodes that use other strategies, so facilitating a gradual introduction in the Future Internet.

As a side contribution, the simulator tool, which is publicly available on the web, gives the on-off model of the output links associated with each NIC as one of the results. This model can be used as a building block to realize a model for the design and performance evaluation of a whole SDN/NFV network.

[Figure 9: Mean delay in the function internal queues (ms) versus burstiness, for NARR, QLWRR, and RR, with $\rho^{(P)}_{\mathrm{High}} = 0.9$ and $\rho^{(P)}_{\mathrm{Low}} = 0.6$.]

[Figure 10: Mean delay in the NIC queues (ms) versus burstiness, for NARR, QLWRR, and RR, with $\rho^{(P)}_{\mathrm{High}} = 0.9$ and $\rho^{(P)}_{\mathrm{Low}} = 0.6$.]


Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

[Figure 11: Mean off- and on-state durations (μs) of the traffic on the output links versus burstiness, for NARR, QLWRR, and RR, with $\rho^{(P)}_{\mathrm{High}} = 0.9$ and $\rho^{(P)}_{\mathrm{Low}} = 0.6$.]

Acknowledgment

This work has been partially supported by the INPUT (In-Network Programmability for Next-Generation Personal cloUd Service Support) project, funded by the European Commission under the Horizon 2020 Programme (Call H2020-ICT-2014-1, Grant no. 644672).

References

[1] White paper on "Software-Defined Networking: The New Norm for Networks," https://www.opennetworking.org/.

[2] M. Yu, L. Jose, and R. Miao, "Software defined traffic measurement with OpenSketch," in Proceedings of the Symposium on Network Systems Design and Implementation (NSDI '13), vol. 13, pp. 29–42, Lombard, Ill, USA, April 2013.

[3] H. Kim and N. Feamster, "Improving network management with software defined networking," IEEE Communications Magazine, vol. 51, no. 2, pp. 114–119, 2013.

[4] S. H. Yeganeh, A. Tootoonchian, and Y. Ganjali, "On scalability of software-defined networking," IEEE Communications Magazine, vol. 51, no. 2, pp. 136–141, 2013.

[5] B. A. A. Nunes, M. Mendonca, X.-N. Nguyen, K. Obraczka, and T. Turletti, "A survey of software-defined networking: past, present, and future of programmable networks," IEEE Communications Surveys and Tutorials, vol. 16, no. 3, pp. 1617–1634, 2014.

[6] White paper on "Network Functions Virtualisation," http://portal.etsi.org/NFV/NFV_White_Paper.pdf.

[7] Network Functions Virtualisation (NFV): Network Operator Perspectives on Industry Progress, ETSI, October 2013, http://portal.etsi.org/NFV/NFV_White_Paper2.pdf.

[8] A. Manzalini, R. Saracco, E. Zerbini, et al., "Software-Defined Networks for Future Networks and Services," White Paper based on the IEEE Workshop SDN4FNS, http://sites.ieee.org/sdn4fns/whitepaper/.

[9] R. Z. Frantz, R. Corchuelo, and J. L. Arjona, "An efficient orchestration engine for the cloud," in Proceedings of the IEEE 3rd International Conference on Cloud Computing Technology and Science (CloudCom '11), vol. 2, pp. 711–716, Athens, Greece, December 2011.

[10] K. Bousselmi, Z. Brahmi, and M. M. Gammoudi, "Cloud services orchestration: a comparative study of existing approaches," in Proceedings of the 28th IEEE International Conference on Advanced Information Networking and Applications Workshops (WAINA '14), pp. 410–416, Victoria, Canada, May 2014.

[11] A. Lombardo, A. Manzalini, V. Riccobene, and G. Schembra, "An analytical tool for performance evaluation of software defined networking services," in Proceedings of the IEEE Network Operations and Management Symposium (NOMS '14), pp. 1–7, Krakow, Poland, May 2014.

[12] A. Lombardo, A. Manzalini, G. Schembra, G. Faraci, C. Rametta, and V. Riccobene, "An open framework to enable NetFATE (network functions at the edge)," in Proceedings of the IEEE Workshop on Management Issues in SDN, SDI and NFV (Mission '15), pp. 1–6, London, UK, April 2015.

[13] A. Manzalini, R. Minerva, F. Callegati, W. Cerroni, and A. Campi, "Clouds of virtual machines in edge networks," IEEE Communications Magazine, vol. 51, no. 7, pp. 63–70, 2013.

[14] H. Moens and F. De Turck, "VNF-P: a model for efficient placement of virtualized network functions," in Proceedings of the 10th International Conference on Network and Service Management (CNSM '14), pp. 418–423, Rio de Janeiro, Brazil, November 2014.

[15] K. Katsalis, G. S. Paschos, Y. Viniotis, and L. Tassiulas, "CPU provisioning algorithms for service differentiation in cloud-based environments," IEEE Transactions on Network and Service Management, vol. 12, no. 1, pp. 61–74, 2015.

[16] D. Tuncer, M. Charalambides, G. Pavlou, and N. Wang, "Towards decentralized and adaptive network resource management," in Proceedings of the 7th International Conference on Network and Service Management (CNSM '11), pp. 1–6, Paris, France, October 2011.

[17] D. Tuncer, M. Charalambides, G. Pavlou, and N. Wang, "DACoRM: a coordinated, decentralized and adaptive network resource management scheme," in Proceedings of the IEEE Network Operations and Management Symposium (NOMS '12), pp. 417–425, IEEE, Maui, Hawaii, USA, April 2012.

[18] M. Charalambides, D. Tuncer, L. Mamatas, and G. Pavlou, "Energy-aware adaptive network resource management," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 369–377, Ghent, Belgium, May 2013.

[19] C. Mastroianni, M. Meo, and G. Papuzzo, "Probabilistic consolidation of virtual machines in self-organizing cloud data centers," IEEE Transactions on Cloud Computing, vol. 1, no. 2, pp. 215–228, 2013.

[20] B. Guan, J. Wu, Y. Wang, and S. U. Khan, "CIVSched: a communication-aware inter-VM scheduling technique for decreased network latency between co-located VMs," IEEE Transactions on Cloud Computing, vol. 2, no. 3, pp. 320–332, 2014.

[21] G. Faraci and G. Schembra, "An analytical model to design and manage a green SDN/NFV CPE node," IEEE Transactions on Network and Service Management, vol. 12, no. 3, pp. 435–450, 2015.

[22] L. Kleinrock, "Time-shared systems: a theoretical treatment," Journal of the ACM, vol. 14, no. 2, pp. 242–261, 1967.

[23] S. F. Yashkov and A. S. Yashkova, "Processor sharing: a survey of the mathematical theory," Automation and Remote Control, vol. 68, no. 9, pp. 1662–1731, 2007.

[24] ETSI GS NFV-INF 001 v1.1.1, "Network Functions Virtualization (NFV): Infrastructure Overview," 2015, https://www.etsi.org/deliver/etsi_gs/NFV-INF/001_099/001/01.01.01_60/gs_NFV-INF001v010101p.pdf.

[25] T. N. Subedi, K. K. Nguyen, and M. Cheriet, "OpenFlow-based in-network Layer-2 adaptive multipath aggregation in data centers," Computer Communications, vol. 61, pp. 58–69, 2015.

[26] N. McKeown, T. Anderson, H. Balakrishnan, et al., "OpenFlow: enabling innovation in campus networks," ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, pp. 69–74, 2008.

[27] NARR simulator v0.1, http://www.diit.unict.it/arti/Tools/NARR_Simulator_v01.zip.

[28] L. Kleinrock, Queueing Systems. Volume 1: Theory, Wiley-Interscience, 1st edition, 1975.

[29] R. Bruschi, P. Lago, A. Lombardo, and G. Schembra, "Modeling power management in networked devices," Computer Communications, vol. 50, pp. 95–109, 2014.

[30] W. Fischer and K. S. Meier-Hellstern, "The Markov-Modulated Poisson process (MMPP) cookbook," Performance Evaluation, vol. 18, no. 2, pp. 149–171, 1993.


Server Resource Dimensioning and Routing of Service Function Chain in NFV Network Architectures, V. Eramo, A. Tosti, and E. Miucci
Volume 2016, Article ID 7139852, 12 pages

A Game for Energy-Aware Allocation of Virtualized Network Functions, Roberto Bruschi, Alessandro Carrega, and Franco Davoli
Volume 2016, Article ID 4067186, 10 pages

A Processor-Sharing Scheduling Strategy for NFV Nodes, Giuseppe Faraci, Alfio Lombardo, and Giovanni Schembra
Volume 2016, Article ID 3583962, 10 pages

Editorial

Design of High Throughput and Cost-Efficient Data Center Networks

Vincenzo Eramo,1 Xavier Hesselbach-Serra,2 Yan Luo,3 and Juan Felipe Botero4

1Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, 00184 Rome, Italy
2Network Engineering Department (ENTEL), Universitat Politecnica de Catalunya, 08034 Barcelona, Spain
3Department of Electronics and Computers Engineering (DECE), University of Massachusetts Lowell, Lowell, MA 01854, USA
4Department of Electronic and Telecommunications Engineering (DETE), Universidad de Antioquia, Oficina 19-450, Medellín, Colombia

Correspondence should be addressed to Vincenzo Eramo; vincenzo.eramo@uniroma1.it

Received 20 April 2016; Accepted 20 April 2016

Copyright © 2016 Vincenzo Eramo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Data centers (DC) are characterized by the sharing of compute and storage resources to support Internet services. Today, many companies (Amazon, Google, Facebook, etc.) use data centers to offer storage, web search, and large-computation services, with multibillion dollar business. The servers are interconnected by elements (switches, routers, interconnection systems, etc.) of a network platform that is referred to as a Data Center Network (DCN).

Network Function Virtualization (NFV) technology, introduced by the European Telecommunications Standards Institute (ETSI), applies cloud computing techniques in the telecommunication field, allowing for a virtualization of the network functions to be executed on software modules running in data centers. Any network service is represented by a Service Function Chain (SFC), that is, a set of VNFs to be executed according to a given order. The running of VNFs needs the instantiation of VNF instances (VNFI), which in general are software modules executed on virtual machines.

The support of NFV needs high-performance servers, due to the more demanding requirements of network services with respect to classical cloud applications.

The purpose of this special issue is to study and evaluate new solutions for the support of the NFV technology.

The special issue consists of four papers, whose brief summaries are listed below.

"Server Resource Dimensioning and Routing of Service Function Chain in NFV Network Architectures" by V. Eramo et al. focuses on the resource dimensioning and SFC routing problems in the NFV architecture. The objective of the problem is to minimize the number of SFCs dropped. The authors formulate the optimization problem and, due to its NP-hard complexity, heuristics are proposed for both cases of offline and online traffic demand.

"A Game for Energy-Aware Allocation of Virtualized Network Functions" by R. Bruschi et al. presents and evaluates an energy-aware, game-theory-based solution for resource allocation of Virtualized Network Functions (VNFs) within NFV environments. The authors consider each VNF as a player of the problem that competes for the physical network node capacity pool, seeking the minimization of individual cost functions. The physical network nodes dynamically adjust their processing capacity according to the incoming workload flows, by means of an Adaptive Rate strategy that aims at minimizing the product of energy consumption and processing delay.

"A Processor-Sharing Scheduling Strategy for NFV Nodes" by G. Faraci et al. focuses on the allocation strategies of processing resources to the virtual machines running the VNFs. The main contribution of the paper is the definition of a processor-sharing policy, referred to as Network-Aware Round Robin (NARR). The proposed strategy dynamically changes the slices of the CPU assigned to each VNF according to the state of the output network interface card queues. In order not to waste output link bandwidth, more processing resources are assigned to the VNFs whose packets are addressed towards the least loaded output NIC.

"Virtual Networking Performance in OpenStack Platform for Network Function Virtualization" by F. Callegati et al. presents a performance evaluation of an open-source Virtual Infrastructure Manager (VIM), OpenStack, focusing in particular on packet forwarding performance issues. A set of experiments are presented that refer to a number of scenarios inspired by the cloud computing and NFV paradigms, considering both single- and multitenant scenarios.

Vincenzo Eramo
Xavier Hesselbach-Serra
Yan Luo
Juan Felipe Botero

Research Article

Virtual Networking Performance in OpenStack Platform for Network Function Virtualization

Franco Callegati, Walter Cerroni, and Chiara Contoli

DEI, University of Bologna, Via Venezia 52, 47521 Cesena, Italy

Correspondence should be addressed to Walter Cerroni; walter.cerroni@unibo.it

Received 19 October 2015; Revised 19 January 2016; Accepted 30 March 2016

Academic Editor: Yan Luo

Copyright © 2016 Franco Callegati et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The emerging Network Function Virtualization (NFV) paradigm, coupled with the highly flexible and programmatic control of network devices offered by Software Defined Networking solutions, enables unprecedented levels of network virtualization that will definitely change the shape of future network architectures, where legacy telco central offices will be replaced by cloud data centers located at the edge. On the one hand, this software-centric evolution of telecommunications will allow network operators to take advantage of the increased flexibility and reduced deployment costs typical of cloud computing. On the other hand, it will pose a number of challenges in terms of virtual network performance and customer isolation. This paper intends to provide some insights on how an open-source cloud computing platform such as OpenStack implements multitenant network virtualization and how it can be used to deploy NFV, focusing in particular on packet forwarding performance issues. To this purpose, a set of experiments is presented that refer to a number of scenarios inspired by the cloud computing and NFV paradigms, considering both single tenant and multitenant scenarios. From the results of the evaluation it is possible to highlight potentials and limitations of running NFV on OpenStack.

1. Introduction

Despite the original vision of the Internet as a set of networks interconnected by distributed layer 3 routing nodes, nowadays IP datagrams are not simply forwarded to their final destination based on IP header and next-hop information. A number of so-called middle-boxes process IP traffic, performing cross-layer tasks such as address translation, packet inspection and filtering, QoS management, and load balancing. They represent a significant fraction of network operators' capital and operational expenses. Moreover, they are closed systems, and the deployment of new communication services is strongly dependent on the product capabilities, causing the so-called "vendor lock-in" and Internet "ossification" phenomena [1]. A possible solution to this problem is the adoption of virtualized middle-boxes based on open software and hardware solutions. Network virtualization brings great advantages in terms of flexible network management, performed at the software level, and possible coexistence of multiple customers sharing the same physical infrastructure (i.e., multitenancy). Network virtualization solutions are already widely deployed at different protocol layers, including Virtual Local Area Networks (VLANs), multilayer Virtual Private Network (VPN) tunnels over public wide-area interconnections, and Overlay Networks [2].

Today the combination of emerging technologies such as Network Function Virtualization (NFV) and Software Defined Networking (SDN) promises to bring innovation one step further. SDN provides a more flexible and programmatic control of network devices and fosters new forms of virtualization that will definitely change the shape of future network architectures [3], while NFV defines standards to deploy software-based building blocks implementing highly flexible network service chains capable of adapting to the rapidly changing user requirements [4].

As a consequence, it is possible to imagine a medium-term evolution of the network architectures where middle-boxes will turn into virtual machines (VMs) implementing network functions within cloud computing infrastructures, and telco central offices will be replaced by data centers located at the edge of the network [5-7]. Network operators will take advantage of the increased flexibility and reduced deployment costs typical of the cloud-based approach, paving the way to the upcoming software-centric evolution of telecommunications [8]. However, a number of challenges must be dealt with, in terms of system integration, data center management, and packet processing performance. For instance, if VLANs are used in the physical switches and in the virtual LANs within the cloud infrastructure, a suitable integration is necessary, and the coexistence of different IP virtual networks dedicated to multiple tenants must be seamlessly guaranteed with proper isolation.

Then a few questions are naturally raised: Will cloud computing platforms be actually capable of satisfying the requirements of complex communication environments such as the operators' edge networks? Will data centers be able to effectively replace the existing telco infrastructures at the edge? Will virtualized networks provide performance comparable to those achieved with current physical networks, or will they pose significant limitations? Indeed, the answer to this question will be a function of the cloud management platform considered. In this work the focus is on OpenStack, which is among the state-of-the-art Linux-based virtualization and cloud management tools. Developed by the open-source software community, OpenStack implements the Infrastructure-as-a-Service (IaaS) paradigm in a multitenant context [9].

To the best of our knowledge, not much work has been reported about the actual performance limits of network virtualization in OpenStack cloud infrastructures under the NFV scenario. Some authors assessed the performance of Linux-based virtual switching [10, 11], while others investigated network performance in public cloud services [12]. Solutions for low-latency SDN implementation on high-performance cloud platforms have also been developed [13]. However, none of the above works specifically deals with NFV scenarios on the OpenStack platform. Although some mechanisms for effectively placing virtual network functions within an OpenStack cloud have been presented [14], a detailed analysis of their network performance has not been provided yet.

This paper aims at providing insights on how the OpenStack platform implements multitenant network virtualization, focusing in particular on the performance issues, trying to fill a gap that is starting to get the attention also from the OpenStack developer community [15]. The paper objective is to identify performance bottlenecks in the cloud implementation of the NFV paradigms. An ad hoc set of experiments were designed to evaluate the OpenStack performance under critical load conditions, in both single tenant and multitenant scenarios. The results reported in this work extend the preliminary assessment published in [16, 17].

The paper is structured as follows: the network virtualization concept in cloud computing infrastructures is further elaborated in Section 2; the OpenStack virtual network architecture is illustrated in Section 3; the experimental test-bed that we have deployed to assess its performance is presented in Section 4; the results obtained under different scenarios are discussed in Section 5; some conclusions are finally drawn in Section 6.

2. Cloud Network Virtualization

Generally speaking, network virtualization is not a new concept. Virtual LANs, Virtual Private Networks, and Overlay Networks are examples of virtualization techniques already widely used in networking, mostly to achieve isolation of traffic flows and/or of whole network sections, either for security or for functional purposes such as traffic engineering and performance optimization [2].

Upon considering cloud computing infrastructures, the concept of network virtualization evolves even further. It is not just that some functionalities can be configured in physical devices to obtain some additional functionality in virtual form. In cloud infrastructures, whole parts of the network are virtual, implemented with software devices and/or functions running within the servers. This new "softwarized" network implementation scenario allows novel network control and management paradigms. In particular, the synergies between NFV and SDN offer programmatic capabilities that allow easily defining and flexibly managing multiple virtual network slices at levels not achievable before [1].

In cloud networking, the typical scenario is a set of VMs dedicated to a given tenant, able to communicate with each other as if connected to the same Local Area Network (LAN), independently of the physical server/servers they are running on. The VMs and LAN of different tenants have to be isolated and should communicate with the outside world only through layer 3 routing and filtering devices. From such requirements stem two major issues to be addressed in cloud networking: (i) integration of any set of virtual networks defined in the data center physical switches with the specific virtual network technologies adopted by the hosting servers and (ii) isolation among virtual networks that must be logically separated because of being dedicated to different purposes or different customers. Moreover, these problems should be solved with performance optimization in mind, for instance, aiming at keeping VMs with intensive exchange of data colocated in the same server, keeping local traffic inside the host and thus reducing the need for external network resources, and minimizing the communication latency.

The solution to these issues is usually fully supported by the VM manager (i.e., the Hypervisor) running on the hosting servers. Layer 3 routing functions can be executed by taking advantage of lightweight virtualization tools, such as Linux containers or network namespaces, resulting in isolated virtual networks with dedicated network stacks (e.g., IP routing tables and netfilter flow states) [18]. Similarly, layer 2 switching is typically implemented by means of kernel-level virtual bridges/switches interconnecting a VM's virtual interface to a host's physical interface. Moreover, the VM placing algorithms may be designed to take networking issues into account, thus optimizing the networking in the cloud together with computation effectiveness [19]. Finally, it is worth mentioning that whatever network virtualization technology is adopted within a data center, it should be compatible with SDN-based implementation of the control plane (e.g., OpenFlow) for improved manageability and programmability [20].


[Figure 1: Main components of an OpenStack cloud setup: the controller node, network node, compute nodes, and storage node are interconnected by the management network and the instance/tunnel (data) network, while the network node provides connectivity to the external network and the Internet for the cloud customers.]

For the purposes of this work, the implementation of layer 2 connectivity in the cloud environment is of particular relevance. Many Hypervisors running on Linux systems implement the LANs inside the servers using Linux Bridge, the native kernel bridging module [21]. This solution is straightforward and is natively integrated with the powerful Linux packet filtering and traffic conditioning kernel functions. The overall performance of this solution should be at a reasonable level when the system is not overloaded [22]. The Linux Bridge basically works as a transparent bridge with MAC learning, providing the same functionality as a standard Ethernet switch in terms of packet forwarding. But such standard behavior is not compatible with SDN and is not flexible enough when aspects such as multitenant traffic isolation, transparent VM mobility, and fine-grained forwarding programmability are critical. The Linux-based bridging alternative is Open vSwitch (OVS), a software switching facility specifically designed for virtualized environments and capable of reaching kernel-level performance [23]. OVS is also OpenFlow-enabled and therefore fully compatible and integrated with SDN solutions.

3. OpenStack Virtual Network Infrastructure

OpenStack provides cloud managers with a web-based dashboard, as well as a powerful and flexible Application Programming Interface (API), to control a set of physical hosting servers executing different kinds of Hypervisors (in general, OpenStack is designed to manage a number of computers hosting application servers; these application servers can be executed by fully fledged VMs, lightweight containers, or bare-metal hosts; in this work we focus on the most challenging case of application servers running on VMs), and to manage the required storage facilities and virtual network infrastructures. The OpenStack dashboard also allows instantiating computing and networking resources within the data center infrastructure with a high level of transparency. As illustrated in Figure 1, a typical OpenStack cloud is composed of a number of physical nodes and networks:

(i) Controller node, managing the cloud platform.

(ii) Network node, hosting the networking services for the various tenants of the cloud and providing external connectivity.

(iii) Compute nodes, as many hosts as needed in the cluster to execute the VMs.

(iv) Storage nodes, to store data and VM images.

(v) Management network, the physical networking infrastructure used by the controller node to manage the OpenStack cloud services running on the other nodes.

(vi) Instance/tunnel network (or data network), the physical network infrastructure connecting the network node and the compute nodes, to deploy virtual tenant networks and allow inter-VM traffic exchange and VM connectivity to the cloud networking services running in the network node.

(vii) External network, the physical infrastructure enabling connectivity outside the data center.

OpenStack has a component specifically dedicated to network service management: this component, formerly known as Quantum, was renamed as Neutron in the Havana release. Neutron decouples the network abstractions from the actual implementation and provides administrators and users with a flexible interface for virtual network management. The Neutron server is centralized and typically runs in the controller node. It stores all network-related information and implements the virtual network infrastructure in a distributed and coordinated way. This allows Neutron to transparently manage multitenant networks across multiple compute nodes and to provide transparent VM mobility within the data center.

Neutron's main network abstractions are:

(i) network, a virtual layer 2 segment;

(ii) subnet, a layer 3 IP address space used in a network;

(iii) port, an attachment point to a network and to one or more subnets on that network;

(iv) router, a virtual appliance that performs routing between subnets and address translation;

(v) DHCP server, a virtual appliance in charge of IP address distribution;

(vi) security group, a set of filtering rules implementing a cloud-level firewall.

A cloud customer wishing to implement a virtual infrastructure in the cloud is considered an OpenStack tenant and can use the OpenStack dashboard to instantiate computing and networking resources, typically creating a new network and the necessary subnets, optionally spawning the related DHCP servers, then starting as many VM instances as required, based on a given set of available images, and specifying the subnet (or subnets) to which the VM is connected. Neutron takes care of creating a port on each specified subnet (and its underlying network) and of connecting the VM to that port, while the DHCP service on that network (resident in the network node) assigns a fixed IP address to it. Other virtual appliances (e.g., routers providing global connectivity) can be implemented directly in the cloud platform, by means of containers and network namespaces typically defined in the network node. The different tenant networks are isolated by means of VLANs and network namespaces, whereas the security groups protect the VMs from external attacks or unauthorized access. When some VM instances offer services that must be reachable by external users, the cloud provider defines a pool of floating IP addresses on the external network and configures the network node with VM-specific forwarding rules based on those floating addresses.

OpenStack implements the virtual network infrastructure (VNI) exploiting multiple virtual bridges connecting virtual and/or physical interfaces that may reside in different network namespaces. To better understand such a complex system, a graphical tool was developed to display all the network elements used by OpenStack [24]. Two examples, showing the internal state of a network node connected to three virtual subnets and a compute node running two VMs, are displayed in Figures 2 and 3, respectively.

Each node runs an OVS-based integration bridge, named br-int, and, connected to it, an additional OVS bridge for each data center physical network attached to the node. So the network node (Figure 2) includes br-tun for the instance/tunnel network and br-ex for the external network. A compute node (Figure 3) includes br-tun only.

Layer 2 virtualization and multitenant isolation on the physical network can be implemented using either VLANs or layer-2-in-layer-3/4 tunneling solutions, such as Virtual eXtensible LAN (VXLAN) or Generic Routing Encapsulation (GRE), which allow extending the local virtual networks also to remote data centers [25]. The examples shown in Figures 2 and 3 refer to the case of tenant isolation implemented with GRE tunnels on the instance/tunnel network. Whatever virtualization technology is used in the physical network, its virtual networks must be mapped into the VLANs used internally by Neutron to achieve isolation. This is performed by taking advantage of the programmable features available in OVS, through the insertion of appropriate OpenFlow mapping rules in br-int and br-tun.

Virtual bridges are interconnected by means of either virtual Ethernet (veth) pairs or patch port pairs, consisting of two virtual interfaces that act as the endpoints of a pipe: anything entering one endpoint always comes out on the other side.

From the networking point of view, the creation of a new VM instance involves the following steps:

(i) The OpenStack scheduler component running in the controller node chooses the compute node that will host the VM.

(ii) A tap interface is created for each VM network interface to connect it to the Linux kernel.

(iii) A Linux Bridge dedicated to each VM network interface is created (in Figure 3 two of them are shown) and the corresponding tap interface is attached to it.

(iv) A veth pair connecting the new Linux Bridge to the integration bridge is created.

The veth pair clearly emulates the Ethernet cable that would connect the two bridges in real life. Nonetheless, why the new Linux Bridge is needed is not intuitive, as the VM's tap interface could be directly attached to br-int. In short, the reason is that the antispoofing rules currently implemented by Neutron adopt the native Linux kernel filtering functions (netfilter) applied to bridged tap interfaces, which work only under Linux Bridges. Therefore, the Linux Bridge is required as an intermediate element to interconnect the VM to the integration bridge. The security rules are applied to the Linux Bridge on the tap interface that connects the kernel-level bridge to the virtual Ethernet port of the VM running in user-space. A sketch of this per-VM plumbing is given in the code below.
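As an illustration, the fragment below reproduces the chain tap interface -> dedicated Linux Bridge -> veth pair -> integration bridge with standard Linux and OVS commands driven from Python; the qbr/qvb/qvo naming mirrors the convention commonly seen in OpenStack deployments, and the snippet is a simplified sketch rather than Neutron's actual agent code.

    # Simplified sketch of the plumbing created for each new VM port:
    # tap -> per-VM Linux Bridge -> veth pair -> OVS integration bridge.
    import subprocess

    def sh(cmd):
        subprocess.run(cmd.split(), check=True)

    def plug_vm_port(port_id):
        tap, qbr = f"tap{port_id}", f"qbr{port_id}"   # tap + per-VM bridge
        qvb, qvo = f"qvb{port_id}", f"qvo{port_id}"   # veth pair endpoints
        sh(f"ip tuntap add dev {tap} mode tap")       # VM network interface
        sh(f"ip link add name {qbr} type bridge")     # dedicated Linux Bridge
        sh(f"ip link set {tap} master {qbr}")         # attach tap to bridge
        sh(f"ip link add {qvb} type veth peer name {qvo}")
        sh(f"ip link set {qvb} master {qbr}")         # bridge side of the pair
        sh(f"ovs-vsctl add-port br-int {qvo}")        # OVS side of the pair
        for ifc in (tap, qbr, qvb, qvo):
            sh(f"ip link set {ifc} up")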

4. Experimental Setup

The previous section makes the complexity of the OpenStack virtual network infrastructure clear. To understand optimal design strategies in terms of network performance, it is of great importance to analyze it under critical traffic conditions and to assess the maximum sustainable packet rate under different application scenarios. The goal is to isolate as much as possible the level of performance of the main OpenStack network components, and to determine where the bottlenecks are located, speculating on possible improvements. To this purpose, a test-bed including a controller node, one or two compute nodes (depending on the specific experiment), and a network node was deployed and used to obtain the results presented in the following.

Figure 2: Network elements in an OpenStack network node connected to three virtual subnets. Three OVS bridges (red boxes) are interconnected by patch port pairs (orange boxes). br-ex is directly attached to the external network physical interface (eth0), whereas a GRE tunnel is established on the instance/tunnel network physical interface (eth1) to connect br-tun with its counterpart in the compute node. A number of br-int ports (light-green boxes) are connected to four virtual router interfaces and three DHCP servers. An additional physical interface (eth2) connects the network node to the management network.

In the test-bed, each compute node runs KVM, the native Linux VM Hypervisor, and is equipped with 8 GB of RAM and a quad-core processor with hyperthreading enabled, resulting in 8 virtual CPUs.

The test-bed was configured to implement three possible use cases:

(1) A typical single tenant cloud computing scenario
(2) A multitenant NFV scenario with dedicated network functions
(3) A multitenant NFV scenario with shared network functions

For each use case, multiple experiments were executed, as reported in the following. In the various experiments, typically a traffic source sends packets at increasing rate to a destination that measures the received packet rate and throughput. To this purpose, the RUDE & CRUDE tool was used for both traffic generation and measurement [26].

In some cases the Iperf3 tool was also added to generate background traffic at a fixed data rate [27]. All physical interfaces involved in the experiments were Gigabit Ethernet network cards.
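As an example of how such an experiment can be driven, the sketch below sweeps a UDP flow across increasing packet rates using iperf3. This only illustrates the method: the measured flows in the tests were generated with RUDE & CRUDE, and the destination address here is a placeholder.

```python
import subprocess

DEST = "192.168.1.2"  # traffic sink (placeholder address)

def udp_flow(rate_kpps: int, payload: int = 1472, secs: int = 30) -> None:
    # iperf3 takes a bit rate, so convert packets/s into payload bits/s;
    # a 1472-byte UDP payload yields a 1500-byte IP packet on the wire.
    bps = rate_kpps * 1000 * payload * 8
    subprocess.run(["iperf3", "-c", DEST, "-u",
                    "-b", str(bps), "-l", str(payload),
                    "-t", str(secs)], check=True)

for rate in range(10, 100, 10):  # 10 to 90 Kpps, as in the experiments
    udp_flow(rate)
```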

4.1. Single Tenant Cloud Computing Scenario. This is the typical configuration where a single tenant runs one or multiple VMs that exchange traffic with one another in the cloud or with an external host, as shown in Figure 4. This is a rather trivial case of limited general interest, but it is useful to assess some basic concepts and pave the way to the deeper analysis developed in the second part of this section. In the experiments reported, as mentioned above, the virtualization Hypervisor was always KVM. A scenario with OpenStack running the cloud environment and a scenario without OpenStack were considered.

Figure 3: Network elements in an OpenStack compute node running two VMs. Two Linux Bridges (blue boxes) are attached to the VM tap interfaces (green boxes) and connected by virtual Ethernet pairs (light-blue boxes) to br-int.

Comparing the two scenarios allows a general assessment and a first isolation of the performance degradation due to the individual building blocks, in particular Linux Bridge and OVS. The experiments cover the following cases:

(1) OpenStack scenario: it adopts the standard OpenStack cloud platform, as described in the previous section, with two VMs acting as sender and receiver, respectively. In particular, the following setups were tested:

(1.1) A single compute node executing two colocated VMs.

(1.2) Two distinct compute nodes, each executing a VM.

(2) Non-OpenStack scenario: it adopts physical hosts running Linux Ubuntu Server and the KVM Hypervisor, using either OVS or Linux Bridge as a virtual switch. The following setups were tested:

(2.1) One physical host executing two colocated VMs, acting as sender and receiver and directly connected to the same Linux Bridge.

(2.2) The same setup as the previous one, but with an OVS bridge instead of a Linux Bridge.

(2.3) Two physical hosts: one executing the sender VM connected to an internal OVS, and the other natively acting as the receiver.

Figure 4: Reference logical architecture of a single tenant virtual infrastructure with 5 hosts: 4 hosts are implemented as VMs in the cloud and are interconnected via the OpenStack layer 2 virtual infrastructure; the 5th host is implemented by a physical machine placed outside the cloud but still connected to the same logical LAN.

Figure 5: Multitenant NFV scenario with dedicated network functions tested on the OpenStack platform.

4.2. Multitenant NFV Scenario with Dedicated Network Functions. The multitenant scenario we want to analyze is inspired by a simple NFV case study, as illustrated in Figure 5: each tenant's service chain consists of a customer-controlled VM, followed by a dedicated deep packet inspection (DPI) virtual appliance and a conventional gateway (router) connecting the customer LAN to the public Internet. The DPI is deployed by the service operator as a separate VM with two network interfaces, running a traffic monitoring application based on the nDPI library [28]. It is assumed that the DPI analyzes the traffic profile of the customers (source and destination IP addresses and ports, application protocol, etc.) to guarantee the matching with the customer service level agreement (SLA), a practice that is rather common among Internet service providers to enforce network security and traffic policing. The virtualization approach, executing the DPI in a VM, makes it possible to easily configure and adapt the inspection function to the specific tenant characteristics. For this reason, every tenant has its own DPI with dedicated configuration. On the other hand, the gateway has to implement a standard functionality and is shared among customers. It is implemented as a virtual router for packet forwarding and NAT operations.

The implementation of the test scenarios has been done following the OpenStack architecture. The compute nodes of the cluster run the VMs, while the network node runs the virtual router within a dedicated network namespace. All layer 2 connections are implemented by a virtual switch (with proper VLAN isolation) distributed in both the compute and network nodes. Figure 6 shows the view provided by the OpenStack dashboard in the case of 4 tenants simultaneously active, which is the one considered for the numerical results presented in the following. The choice of 4 tenants was made to provide meaningful results with an acceptable degree of complexity, without lack of generality. As the results show, this is enough to put the hardware resources of the compute node under stress and therefore evaluate performance limits and critical issues.

It is very important to outline that the VM setup shown in Figure 5 is not commonly seen in a traditional cloud computing environment. The VMs usually behave as single hosts connected as endpoints to one or more virtual networks, with one single network interface and no pass-through forwarding duties. In NFV, the virtual network functions (VNFs) often perform actions that require packet forwarding. Network Address Translators (NATs), Deep Packet Inspectors (DPIs), and so forth all belong to this category. If such VNFs are hosted in VMs, the result is that VMs in the OpenStack infrastructure must be allowed to perform packet forwarding, which goes against the typical rules implemented for security reasons in OpenStack. For instance, when a new VM is instantiated, it is attached to a Linux Bridge to which filtering rules are applied, with the goal of avoiding that the VM sends packets with MAC and IP addresses that are not the ones allocated to the VM itself. Clearly, this is an antispoofing rule that makes perfect sense in a normal networking environment, but it impairs the forwarding of packets originated by another VM, as is the case in the NFV scenario. In the scenario considered here, it was therefore necessary to permanently modify the filtering rules in the Linux Bridges, by allowing, within each tenant slice, packets coming from or directed to the customer VM's IP address to pass through the Linux Bridges attached to the DPI virtual appliance. Similarly, the virtual router is usually connected just to one LAN, so its NAT function is configured for a single pool of addresses. This was also modified and adapted to serve the whole set of internal networks used in the multitenant setup.
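In principle, the relaxation amounts to a single filtering rule exempting the customer VM's address from the drop-by-default spoofing check. The sketch below illustrates the idea only: the per-port chain name is an assumption modeled on Neutron's naming scheme, and the actual rule set depends on the OpenStack release.

```python
import subprocess

# Placeholder identifiers: the chain name is modeled on Neutron's per-port
# naming scheme and must be read from the actual iptables rule set.
PORT_CHAIN = "neutron-openvswi-s1234abcd-ef"  # anti-spoofing chain (assumed)
CUSTOMER_IP = "10.0.3.5"                      # customer VM address (assumed)

# Exempt packets sourced by the customer VM from the drop-by-default
# spoofing check applied to the DPI port, so they can be forwarded.
subprocess.run(["iptables", "-I", PORT_CHAIN, "1",
                "-s", CUSTOMER_IP, "-j", "RETURN"], check=True)
```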

4.3. Multitenant NFV Scenario with Shared Network Functions. We finally extend our analysis to a set of multitenant scenarios assuming different levels of shared VNFs, as illustrated in Figure 7. We start with a single VNF, that is, the virtual router connecting all tenants to the external network (Figure 7(a)). Then we progressively add a shared DPI (Figure 7(b)), a shared firewall/NAT function (Figure 7(c)), and a shared traffic shaper (Figure 7(d)). The rationale behind this last group of setups is to evaluate how NFV deployment on top of an OpenStack compute node performs under a realistic multitenant scenario, where traffic flows must be processed by a chain of multiple VNFs.

Figure 6: The OpenStack dashboard shows the tenants' virtual networks (slices). Each slice includes a VM connected to an internal network (InVMnet_i) and a second VM performing DPI and packet forwarding between InVMnet_i and DPInet_i. Connectivity with the public Internet is provided for all by the virtual router in the bottom-left corner.

The complexity of the virtual network path inside the compute node for the VNF chaining of Figure 7(d) is displayed in Figure 8. The peculiar nature of NFV traffic flows is clearly shown in the figure, where packets are forwarded multiple times across br-int as they enter and exit the multiple VNFs running in the compute node.

5. Numerical Results

5.1. Benchmark Performance. Before presenting and discussing the performance of the study scenarios described above, it is important to set a benchmark as a reference for comparison. This was done by considering a back-to-back (B2B) connection between two physical hosts with the same hardware configuration used in the cluster of the cloud platform.

The former host acts as traffic generator, while the latter acts as traffic sink. The aim is to verify and assess the maximum throughput and sustainable packet rate of the hardware platform used for the experiments. Packet flows ranging from $10^3$ to $10^5$ packets per second (pps), for both 64- and 1500-byte IP packet sizes, were generated.

For 1500-byte packets, the throughput saturates at about 970 Mbps at 80 Kpps. Given that the measurement does not consider the Ethernet overhead, this limit is clearly very close to 1 Gbps, which is the physical limit of the Ethernet interface. For 64-byte packets the results are different, since the maximum measured throughput is about 150 Mbps.

Figure 7: Multitenant NFV scenario with shared network functions tested on the OpenStack platform: (a) single VNF; (b) two VNFs' chaining; (c) three VNFs' chaining; (d) four VNFs' chaining.

Therefore, the limiting factor is not the Ethernet bandwidth but the maximum sustainable packet processing rate of the compute node. These results are shown in Figure 9.
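As a quick sanity check (our own arithmetic, not an additional measurement), the 1500-byte saturation point is consistent with the Gigabit line rate once the Ethernet overhead is excluded:

$$80\,\text{Kpps} \times 1500\,\text{B/packet} \times 8\,\text{bit/B} = 960\,\text{Mbps} \approx 1\,\text{Gbps}.$$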

This latter limitation, related to the processing capabilities of the hosts, is not very relevant to the scope of this work. Indeed, it is always possible in a real operational environment to deploy more powerful and better dimensioned hardware. This was not possible in this set of experiments, where the cloud cluster was an existing research infrastructure which could not be modified at will. Nonetheless, the objective here is to understand the limitations that emerge as a consequence of the networking architecture resulting from the deployment of the VNFs in the cloud, and not of the specific hardware configuration. For these reasons, as well as for the sake of brevity, the numerical results presented in the following mostly focus on the case of 1500-byte packet length, which stresses the network more than the hosts in terms of performance.

5.2. Single Tenant Cloud Computing Scenario. The first series of results is related to the single tenant scenario described in Section 4.1. Figure 10 shows the comparison of OpenStack setups (1.1) and (1.2) with the B2B case. The figure shows that the different networking configurations play a crucial role in performance. Setup (1.1), with the two VMs colocated in the same compute node, clearly is more demanding, since the compute node has to process the workload of all the components shown in Figure 3, that is, packet generation and reception in two VMs and layer 2 switching in two Linux Bridges and two OVS bridges (as a matter of fact, the packets are both outgoing and incoming at the same time within the same physical machine). The performance starts deviating from the B2B case at around 20 Kpps, with a saturating effect starting at 30 Kpps. This is the maximum packet processing capability of the compute node, regardless of the physical networking capacity, which is not fully exploited in this particular scenario where the traffic flow does not leave the physical host. Setup (1.2) splits the workload over two physical machines, and the benefit is evident. The performance is almost ideal, with a very little penalty due to the virtualization overhead.

These very simple experiments lead to an important conclusion that motivates the more complex experiments that follow: the standard OpenStack virtual network implementation can show significant performance limitations.

Figure 8: A view of the OpenStack compute node with the tenant VM and the VNFs installed, including the building blocks of the virtual network infrastructure. The red dashed line shows the path followed by the packets traversing the VNF chain displayed in Figure 7(d).

Figure 9: Throughput versus generated packet rate in the B2B setup for 64- and 1500-byte packets. Comparison with ideal 1500-byte packet throughput.

For this reason, the first objective was to investigate where the possible bottleneck is, by evaluating the performance of the virtual network components in isolation. This cannot be done with OpenStack in action; therefore, ad hoc virtual networking scenarios were implemented, deploying just parts of the typical OpenStack infrastructure.

Figure 10: Received versus generated packet rate in the OpenStack scenario, setups (1.1) and (1.2), with 1500-byte packets.

These are called Non-OpenStack scenarios in the following.

Setups (2.1) and (2.2) compare Linux Bridge, OVS, and B2B, as shown in Figure 11. The graphs show interesting and important results, which can be summarized as follows:

(i) The introduction of a virtual network component (thus introducing the processing load of the physical hosts into the equation) is always a cause of performance degradation, but with very different degrees of magnitude depending on the virtual network component.

Figure 11: Received versus generated packet rate in the Non-OpenStack scenario, setups (2.1) and (2.2), with 1500-byte packets.


(ii) OVS introduces a rather limited performance degradation, appearing only at very high packet rates, with a loss of a few percent.

(iii) Linux Bridge introduces a significant performance degradation, starting well before the OVS case and leading to a loss in throughput as high as 50%.

The conclusion of these experiments is that the presence of additional Linux Bridges in the compute nodes is one of the main reasons for the OpenStack performance degradation. Results obtained from testing setup (2.3) are displayed in Figure 12, confirming that with OVS it is possible to reach performance comparable with the baseline.

5.3. Multitenant NFV Scenario with Dedicated Network Functions. The second series of experiments was performed with reference to the multitenant NFV scenario with dedicated network functions described in Section 4.2. The case study considers different numbers of tenants hosted in the same compute node, sending data to a destination outside the LAN, therefore beyond the virtual gateway. Figure 13 shows the packet rate actually received at the destination for each tenant, for different numbers of simultaneously active tenants, with 1500-byte IP packet size. In all cases the tenants generate the same amount of traffic, resulting in as many overlapping curves as the number of active tenants. All curves grow linearly as long as the generated traffic is sustainable, and then they saturate. The saturation is caused by the physical bandwidth limit imposed by the Gigabit Ethernet interfaces involved in the data transfer. In fact, the curves become flat as soon as the packet rate reaches about 80 Kpps for 1 tenant, about 40 Kpps for 2 tenants, about 27 Kpps for 3 tenants, and about 20 Kpps for 4 tenants, that is, when the total packet rate is slightly more than 80 Kpps, corresponding to 1 Gbps.

Figure 12: Received versus generated packet rate in the Non-OpenStack scenario, setup (2.3), with 1500-byte packets.

Figure 13: Received versus generated packet rate for each tenant (T1, T2, T3, and T4), for different numbers of active tenants, with 1500-byte IP packet size.


In this case, it is worth investigating what happens with small packets, which put more pressure on the processing capabilities of the compute node. Figure 14 reports the 64-byte packet size case. As discussed previously, in this case the performance saturation is not caused by the physical bandwidth limit, but by the inability of the hardware platform to cope with the packet processing workload (in fact, the single compute node has to process the workload of all the components involved, including packet generation and DPI in the VMs of each tenant, as well as layer 2 packet processing and switching in three Linux Bridges per tenant and two OVS bridges).

Figure 14: Received versus generated packet rate for each tenant (T1, T2, T3, and T4), for different numbers of active tenants, with 64-byte IP packet size.

As could be easily expected from the results presented in Figure 9, the virtual network is not able to use the whole physical capacity. Even in the case of just one tenant, a total bit rate of about 77 Mbps, well below 1 Gbps, is measured. Moreover, this penalty increases with the number of tenants (i.e., with the complexity of the virtual system). With two tenants, the curves saturate at a total of approximately 150 Kpps (75 × 2); with three tenants, at a total of approximately 135 Kpps (45 × 3); and with four tenants, at a total of approximately 120 Kpps (30 × 4). This is to say that an increase of one unit in the number of tenants results in a decrease of about 10% in the usable overall network capacity, and in a similar penalty per tenant.
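The arithmetic behind this observation can be restated compactly (the saturation values below are the ones read from Figure 14, not new measurements):

```python
# Aggregate saturation rates (Kpps) for 64-byte packets, read from Figure 14;
# the single-tenant value follows from ~77 Mbps / (64 B * 8 bit) ~= 150 Kpps.
total_kpps = {1: 150, 2: 150, 3: 135, 4: 120}

for n in (2, 3, 4):
    per_tenant = total_kpps[n] / n
    loss = 1 - total_kpps[n] / total_kpps[n - 1]
    print(f"{n} tenants: {per_tenant:.0f} Kpps per tenant, "
          f"{loss:.0%} aggregate capacity lost vs. {n - 1} tenants")
# The ~10% penalty per added tenant shows up from 2 tenants onwards.
```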

Given the results of the previous section, it is likely that the Linux Bridges are responsible for most of this performance degradation. In Figure 15 a comparison is presented between the total throughput obtained under normal OpenStack operations and the corresponding total throughput measured in a custom configuration where the Linux Bridges attached to each VM are bypassed. To implement the latter scenario, the OpenStack virtual network configuration running in the compute node was modified by connecting each VM's tap interface directly to the OVS integration bridge. The curves show that the presence of Linux Bridges in normal OpenStack mode is indeed causing performance degradation, especially when the workload is high (i.e., with 4 tenants). It is interesting to note also that the penalty related to the number of tenants is mitigated by the bypass, but not fully solved.
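A minimal sketch of the bypass, using the same standard tools and placeholder names as in the earlier sketch: the per-VM Linux Bridge and its veth pair are removed, and the tap interface is plugged straight into br-int. Note that this disables the Neutron security group filtering on that port, so it is suitable only for controlled experiments.

```python
import subprocess

def sh(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

sfx = "xxxx"  # hypothetical port suffix, as in the earlier sketch

# Remove the intermediate Linux Bridge and its veth pair...
sh("ovs-vsctl", "del-port", "br-int", f"qvo{sfx}")
sh("ip", "link", "del", f"qvb{sfx}")   # deleting one end removes the pair
sh("ip", "link", "set", f"qbr{sfx}", "down")
sh("brctl", "delbr", f"qbr{sfx}")

# ...and plug the VM tap interface straight into the integration bridge.
# Caveat: security group filtering for this port is lost in the process.
sh("ovs-vsctl", "add-port", "br-int", f"tap{sfx}")
```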

5.4. Multitenant NFV Scenario with Shared Network Functions. The third series of experiments was performed with reference to the multitenant NFV scenario with shared network functions described in Section 4.3.

Figure 15: Total throughput measured versus total packet rate generated by 2 to 4 tenants, for 64-byte packet size. Comparison between normal OpenStack mode and Linux Bridge bypass with 3 and 4 tenants.

Figure 16: Received versus generated packet rate for one tenant (T1) when four tenants are active, with 1500-byte IP packet size and different levels of VNF chaining, as per Figure 7. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

In each experiment, four tenants equally generate increasing amounts of traffic, ranging from 1 to 100 Kpps. Figures 16 and 17 show the packet rate actually received at the destination from tenant T1, as a function of the packet rate generated by T1, for different levels of VNF chaining, with 1500- and 64-byte IP packet size, respectively. The measurements demonstrate that, for the 1500-byte case, adding a single shared VNF (even one that executes heavy packet processing, such as the DPI) does not significantly impact the forwarding performance of the OpenStack compute node for packet rates below 50 Kpps.

Figure 17: Received versus generated packet rate for one tenant (T1) when four tenants are active, with 64-byte IP packet size and different levels of VNF chaining, as per Figure 7. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

(Note that the physical capacity is saturated by the flows simultaneously generated from four tenants at around 20 Kpps, similarly to what happens in the dedicated VNF case of Figure 13.) Then the throughput slowly degrades. In contrast, when 64-byte packets are generated, even a single VNF can cause heavy performance losses above 25 Kpps, when the packet rate reaches the sustainability limit of the forwarding capacity of our compute node. Independently of the packet size, adding another VNF with heavy packet processing (the firewall/NAT is configured with 40000 matching rules) causes the performance to degrade rapidly. This is confirmed when a fourth VNF is added to the chain, although for the 1500-byte case the measured packet rate is the one that saturates the maximum bandwidth made available by the traffic shaper. Very similar performance, which we do not show here, was measured also for the other three tenants.

To further investigate the effect of VNF chaining, we considered the case when traffic generated by tenant T1 is not subject to VNF chaining (as in Figure 7(a)), whereas flows originated from T2, T3, and T4 are processed by four VNFs (as in Figure 7(d)). The results presented in Figures 18 and 19 demonstrate that, owing to the traffic shaping function applied to the other tenants, the throughput of T1 can reach values not very far from the case when it is the only active tenant, especially for packet rates below 35 Kpps. Therefore, a smart choice of the VNF chaining and a careful planning of the cloud platform resources could improve the performance of a given class of priority customers. In the same situation, we measured the TCP throughput achievable by the four tenants. As shown in Figure 20, we can reach the same conclusions as in the UDP case.

Figure 18: Received throughput versus generated packet rate for each tenant (T1, T2, T3, and T4) when T1 does not traverse the VNF chain of Figure 7(d), with 1500-byte IP packet size. Comparison with the single tenant case. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

Figure 19: Received throughput versus generated packet rate for each tenant (T1, T2, T3, and T4) when T1 does not traverse the VNF chain of Figure 7(d), with 64-byte IP packet size. Comparison with the single tenant case. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

6. Conclusion

Network Function Virtualization will completely reshape the approach of telco operators to providing existing as well as novel network services, taking advantage of the increased flexibility and reduced deployment costs of the cloud computing paradigm.

Figure 20: Received TCP throughput for each tenant (T1, T2, T3, and T4) when T1 does not traverse the VNF chain of Figure 7(d). Comparison with the single tenant case. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

In this work, the problem of evaluating the complexity and performance, in terms of sustainable packet rate, of virtual networking in cloud computing infrastructures dedicated to NFV deployment was addressed. An OpenStack-based cloud platform was considered and deeply analyzed to fully understand the architecture of its virtual network infrastructure. To this end, an ad hoc visual tool was also developed that graphically plots the different functional blocks (and related interconnections) put in place by Neutron, the OpenStack networking service. Some examples were provided in the paper.

The analysis brought the focus of the performance investigation on the two basic software switching elements natively adopted by OpenStack, namely, Linux Bridge and Open vSwitch. Their performance was first analyzed in a single tenant cloud computing scenario, by running experiments on a standard OpenStack setup, as well as in ad hoc stand-alone configurations built with the specific purpose of observing them in isolation. The results prove that the Linux Bridge is the critical bottleneck of the architecture, while Open vSwitch shows an almost optimal behavior.

The analysis was then extended to more complex scenarios, assuming a data center hosting multiple tenants deploying NFV environments. The case studies considered first a simple dedicated deep packet inspection function, followed by conventional address translation and routing, and then a more realistic virtual network function chaining shared among a set of customers, with increasing levels of complexity. Results about the sustainable packet rate and throughput performance of the virtual network infrastructure were presented and discussed.

The main outcome of this work is that an open-source cloud computing platform such as OpenStack can be effectively adopted to deploy NFV in network edge data centers replacing legacy telco central offices. However, this solution poses some limitations to the network performance, which are not simply related to the maximum capacity of the hosting hardware, but also to the virtual network architecture implemented by OpenStack. Nevertheless, our study demonstrates that some of these limitations can be mitigated with a careful redesign of the virtual network infrastructure and an optimal planning of the virtual network functions. In any case, such limitations must be carefully taken into account for any engineering activity in the virtual networking arena.

Obviously, scaling up the system and distributing the virtual network functions among several compute nodes will definitely improve the overall performance. However, in this case the role of the physical network infrastructure becomes critical, and an accurate analysis is required in order to isolate the contributions of virtual and physical components. We plan to extend our study in this direction in our future work, after properly upgrading our experimental test-bed.

Competing Interests

The authors declare that they have no competing interests

Acknowledgments

This work was partially funded by EIT ICT Labs, Action Line on Future Networking Solutions, Activity no. 15270/2015 "SDN at the Edges". The authors would like to thank Mr. Giuliano Santandrea for his contributions to the experimental setup.

References

[1] B. Han, V. Gopalakrishnan, L. Ji, and S. Lee, "Network function virtualization: challenges and opportunities for innovations," IEEE Communications Magazine, vol. 53, no. 2, pp. 90-97, 2015.

[2] N. M. Mosharaf Kabir Chowdhury and R. Boutaba, "A survey of network virtualization," Computer Networks, vol. 54, no. 5, pp. 862-876, 2010.

[3] The Open Networking Foundation, Software-Defined Networking: The New Norm for Networks, ONF White Paper, The Open Networking Foundation, 2012.

[4] The European Telecommunications Standards Institute, "Network functions virtualisation (NFV): architectural framework," ETSI GS NFV 002 V1.2.1, The European Telecommunications Standards Institute, 2014.

[5] A. Manzalini, R. Minerva, F. Callegati, W. Cerroni, and A. Campi, "Clouds of virtual machines in edge networks," IEEE Communications Magazine, vol. 51, no. 7, pp. 63-70, 2013.

[6] J. Soares, C. Goncalves, B. Parreira et al., "Toward a telco cloud environment for service functions," IEEE Communications Magazine, vol. 53, no. 2, pp. 98-106, 2015.

[7] Open Networking Lab, Central Office Re-Architected as Datacenter (CORD), ON.Lab White Paper, Open Networking Lab, 2015.

[8] K. Pretz, "Software already defines our lives, but the impact of SDN will go beyond networking alone," IEEE The Institute, vol. 38, no. 4, p. 8, 2014.

[9] OpenStack Project, http://www.openstack.org.

[10] F. Sans and E. Gamess, "Analytical performance evaluation of different switch solutions," Journal of Computer Networks and Communications, vol. 2013, Article ID 953797, 11 pages, 2013.

[11] P. Emmerich, D. Raumer, F. Wohlfart, and G. Carle, "Performance characteristics of virtual switching," in Proceedings of the 3rd International Conference on Cloud Networking (CloudNet '14), pp. 120-125, IEEE, Luxembourg City, Luxembourg, October 2014.

[12] R. Shea, F. Wang, H. Wang, and J. Liu, "A deep investigation into network performance in virtual machine based cloud environments," in Proceedings of the 33rd IEEE Conference on Computer Communications (INFOCOM '14), pp. 1285-1293, IEEE, Ontario, Canada, May 2014.

[13] P. Rad, R. V. Boppana, P. Lama, G. Berman, and M. Jamshidi, "Low-latency software defined network for high performance clouds," in Proceedings of the 10th System of Systems Engineering Conference (SoSE '15), pp. 486-491, San Antonio, Tex, USA, May 2015.

[14] S. Oechsner and A. Ripke, "Flexible support of VNF placement functions in OpenStack," in Proceedings of the 1st IEEE Conference on Network Softwarization (NetSoft '15), pp. 1-6, London, UK, April 2015.

[15] G. Almasi, M. Banikazemi, B. Karacali, M. Silva, and J. Tracey, "OpenStack networking: it's time to talk performance," in Proceedings of the OpenStack Summit, Vancouver, Canada, May 2015.

[16] F. Callegati, W. Cerroni, C. Contoli, and G. Santandrea, "Performance of network virtualization in cloud computing infrastructures: the OpenStack case," in Proceedings of the 3rd IEEE International Conference on Cloud Networking (CloudNet '14), pp. 132-137, Luxembourg City, Luxembourg, October 2014.

[17] F. Callegati, W. Cerroni, C. Contoli, and G. Santandrea, "Performance of multi-tenant virtual networks in OpenStack-based cloud infrastructures," in Proceedings of the 2nd IEEE Workshop on Cloud Computing Systems, Networks, and Applications (CCSNA '14), in conjunction with IEEE Globecom 2014, pp. 81-85, Austin, Tex, USA, December 2014.

[18] N. Handigol, B. Heller, V. Jeyakumar, B. Lantz, and N. McKeown, "Reproducible network experiments using container-based emulation," in Proceedings of the 8th International Conference on Emerging Networking Experiments and Technologies (CoNEXT '12), pp. 253-264, ACM, December 2012.

[19] P. Bellavista, F. Callegati, W. Cerroni et al., "Virtual network function embedding in real cloud environments," Computer Networks, vol. 93, part 3, pp. 506-517, 2015.

[20] M. F. Bari, R. Boutaba, R. Esteves et al., "Data center network virtualization: a survey," IEEE Communications Surveys & Tutorials, vol. 15, no. 2, pp. 909-928, 2013.

[21] The Linux Foundation, Linux Bridge, The Linux Foundation, 2009, http://www.linuxfoundation.org/collaborate/workgroups/networking/bridge.

[22] J. T. Yu, "Performance evaluation of Linux Bridge," in Proceedings of the Telecommunications System Management Conference, Louisville, Ky, USA, April 2004.

[23] B. Pfaff, J. Pettit, T. Koponen, K. Amidon, M. Casado, and S. Shenker, "Extending networking into the virtualization layer," in Proceedings of the 8th ACM Workshop on Hot Topics in Networks (HotNets '09), New York, NY, USA, October 2009.

[24] G. Santandrea, Show My Network State, 2014, https://sites.google.com/site/showmynetworkstate.

[25] R. Jain and S. Paul, "Network virtualization and software defined networking for cloud computing: a survey," IEEE Communications Magazine, vol. 51, no. 11, pp. 24-31, 2013.

[26] "RUDE & CRUDE: Real-Time UDP Data Emitter & Collector for RUDE," http://sourceforge.net/projects/rude.

[27] "iperf3: a TCP, UDP, and SCTP network bandwidth measurement tool," https://github.com/esnet/iperf.

[28] "nDPI: Open and Extensible LGPLv3 Deep Packet Inspection Library," http://www.ntop.org/products/ndpi.

Research Article

Server Resource Dimensioning and Routing of Service Function Chain in NFV Network Architectures

V. Eramo,1 A. Tosti,2 and E. Miucci1

1DIET, "Sapienza" University of Rome, Via Eudossiana 18, 00184 Rome, Italy
2Telecom Italia, Via di Val Cannuta 250, 00166 Roma, Italy

Correspondence should be addressed to E. Miucci; 1kronos1@gmail.com

Received 29 September 2015; Accepted 18 February 2016

Academic Editor: Vinod Sharma

Copyright © 2016 V. Eramo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The Network Function Virtualization (NFV) technology aims at virtualizing the network service with the execution of the single service components in Virtual Machines activated on Commercial-off-the-shelf (COTS) servers. Any service is represented by a Service Function Chain (SFC), that is, a set of VNFs to be executed according to a given order. The running of VNFs needs the instantiation of VNF instances (VNFIs) that, in general, are software components executed on Virtual Machines. In this paper we cope with the routing and resource dimensioning problem in NFV architectures. We formulate the optimization problem and, due to its NP-hard complexity, heuristics are proposed for both cases of offline and online traffic demand. We show how the heuristics work correctly by guaranteeing a uniform occupancy of the server processing capacity and the network link bandwidth. A consolidation algorithm for the minimization of power consumption is also proposed. The application of the consolidation algorithm allows for a high power consumption saving that, however, is to be paid with an increase in SFC blocking probability.

1. Introduction

Today's networks are overly complex, partly due to an increasing variety of proprietary, fixed-function appliances that are unable to deliver the agility and economics needed to address constantly changing market requirements [1]. This is because network elements have traditionally been optimized for high packet throughput at the expense of flexibility, thus hampering the deployment of new services [2]. Network Function Virtualization (NFV) can provide the infrastructure flexibility and agility needed to successfully compete in today's evolving communications landscape [3]. NFV implements network functions in software running on a pool of shared commodity servers, instead of using dedicated proprietary hardware. This virtualized approach decouples the network hardware from the network function and results in increased infrastructure flexibility and reduced hardware costs. Because the infrastructure is simplified and streamlined, new and expanded services can be created quickly and with less expense. Implementation of the paradigm has also been proposed [4] and its performance has been investigated [5]. To support the NFV technology, both ETSI [6, 7] and the IETF [8, 9] are defining novel network architectures able to allocate resources for Virtualized Network Functions (VNFs), as well as manage and orchestrate NFV to support services. In particular, the service is represented by a Service Function Chain (SFC) [8], that is, a set of VNFs that have to be executed according to a given order. Any VNF is run on a VNF instance (VNFI), implemented with one Virtual Machine (VM) whose resources (Vcores, RAM memory, etc.) are allocated to it [10]. Some solutions have been proposed in the literature to solve the problem of choosing the servers where to instantiate the VNFs and to determine the network paths interconnecting them [1]. A formulation of the optimization problem is illustrated in [11]. Three greedy algorithms and a tabu search-based heuristic are proposed in [12]. The extension of the problem to the case in which virtual routers are also considered is proposed in [13].

The innovative contribution of our paper is that, differently from [12], we follow an approach that allows for both the resource dimensioning and the SFC routing. We formulate the optimization problem whose objective is the minimization of the number of dropped SFC requests, with the constraint that the SFCs are routed so as to respect both the server processing capacity and the network link bandwidth.



With the problem being NP-hard, we introduce a heuristic that allows for (i) the dimensioning of the Virtual Machines in terms of number of Vcores assigned and (ii) the SFC routing through the VNF instances implemented by the Virtual Machines. The heuristic performs the dimensioning and routing operations simultaneously. Furthermore, we propose a consolidation algorithm based on Virtual Machine migrations and able to achieve power consumption savings. The proposed algorithms are evaluated in scenarios characterized by offline and online traffic demands.

The paper is organized as follows. The related work is discussed in Section 2. Section 3 is devoted to illustrating the optimization problem and the proposed heuristic for the offline traffic demand case. In Section 4 we describe an SFC planner in which the proposed heuristic is applied in the case of online traffic demand. The planner also implements a consolidation algorithm whose application allows for power consumption savings. Some numerical results are shown in Section 5 to prove the effectiveness of the proposed heuristics. Finally, the main conclusions and future research items are mentioned in Section 6.

2. Related Work

The Internet Engineering Task Force (IETF) has formed the SFC Working Group [8, 9] to define Service Function Chaining related problems and to standardize the architecture and protocols. A Service Function Chain (SFC) is defined as a set of abstract service functions [15] and ordering constraints that must be applied to packets selected as a result of the classification. When virtual service functions are considered, the SFC is referred to as a Virtual Network Function Forwarding Graph (VNFFG) within the ETSI [6]. To support the SFCs, Virtual Network Function instances (VNFIs) are activated and executed in COTS servers. To achieve the economics of scale expected from NFV, network link and server resources should be used efficiently. For this reason, efficient algorithms have to be introduced to determine where to instantiate the VNFIs and to route the SFCs, by choosing the network paths and the VNFIs involved. The algorithms have to take into account the limited resources of the network links and the servers, and pursue objectives of load balancing, energy saving, recovery from failure, and so on [1]. The task of placing SFCs is closely related to virtual network embedding [16] and virtual data network embedding [17], and may therefore be formulated as an optimization problem with a particular objective. This approach has been followed by [11, 18-21]. For instance, Moens and Turck [11] formulate the SFC placement problem as a linear optimization problem that has the objective of minimizing the number of active servers. Other objectives are pursued in [19] (latency, remaining data rate, number of used network nodes, etc.), and the SFC placing is formulated as a mixed integer quadratically constrained program.

It has been proved that the SFC placing problem is NP-hard. For this reason, efficient heuristics have been proposed and evaluated [10, 13, 22, 23]. For example, Xia et al. [22] formulate the placement and chaining problem as binary integer programming and propose a greedy heuristic in which the SFCs are first sorted according to their resource demands, and the SFCs with the highest resource demands are given priority for placement and routing.

In order to achieve energy consumption savings, the NFV architecture should allow for migrations of VNFIs, that is, the migration of the Virtual Machines implementing the VNFIs. Though VNFI migrations allow for energy consumption savings, they may impact the QoS performance received by the users of the migrated VNFs. A model has been proposed in [24] to derive some performance indicators, such as the whole service downtime and the total migration time, so as to make function migration decisions.

The contribution of this paper is twofold: (i) to propose and to evaluate the performance of an algorithm that performs simultaneously resource dimensioning and SFC routing and (ii) to investigate the advantages, from the point of view of power consumption saving, that the application of server consolidation techniques allows us to achieve.

3. Offline Algorithms for SFC Routing in NFV Network Architectures

We consider the case in which SFC requests are known in advance. We formulate the optimization problem whose objective is the minimization of the number of dropped SFC requests, with the constraint that the SFCs are routed so as to respect both the server processing capacity and the network link bandwidth. With the problem being NP-hard, we introduce a heuristic that allows for (i) the dimensioning of the Virtual Machines in terms of the number of Vcores assigned and (ii) the SFC routing through the VNF instances implemented by the Virtual Machines. The heuristic performs the dimensioning and routing operations simultaneously.

The section is organized as follows. The network and traffic model is introduced in Section 3.1. Sections 3.2 and 3.3 are devoted to illustrating the optimization problem and the proposed heuristic, respectively.

3.1. Network and Traffic Model. Next we introduce the main terminology used to represent the physical network, the VNFs, and the SFC traffic requests [25]. We represent the physical network PN as a directed graph $G^{PN} = (V^{PN}, E^{PN})$, where $V^{PN}$ and $E^{PN}$ are the sets of physical nodes and links, respectively. The set $V^{PN}$ of nodes is given by the union of the three node sets $V^{PN}_{A}$, $V^{PN}_{R}$, and $V^{PN}_{S}$, which are the sets of access, switching, and server nodes, respectively. The server nodes and links are characterized by the following:

(i) $N^{PN}_{\mathrm{core}}(w)$: processing capacity of the server node $w \in V^{PN}_{S}$, in terms of the number of cores available;

(ii) $C^{PN}(d)$: bandwidth of the physical link $d \in E^{PN}$.

We assume that $F$ types of VNFs can be provisioned, such as firewall, IDS, proxy, load balancers, and so on. We denote by $\mathcal{F} = \{f_1, f_2, \ldots, f_F\}$ the set of VNFs, with $f_i$ being the $i$th VNF type.


Figure 1: An example of a graph representing an SFC request for a flow characterized by a bandwidth of 1 Mbps and a packet length of 1500 bytes. The ingress access node $u$ is connected to the VNF nodes $v_1$ and $v_2$ and to the egress access node $t$ through the links $e_1$, $e_2$, and $e_3$; $B^{SFC}(v_1) = B^{SFC}(v_2) = 83.33$ and $C^{SFC}(e_1) = C^{SFC}(e_2) = C^{SFC}(e_3) = 1$.

The packet processing time of the VNF $f_i$ is denoted by $t^{proc}_{i}$ ($i = 1, \ldots, F$). We also assume that the network operator is receiving $T$ Service Function Chain (SFC) requests, known in advance. The $i$th SFC request is characterized by the graph $G^{SFC}_{i} = (V^{SFC}_{i}, E^{SFC}_{i})$, where $V^{SFC}_{i}$ represents the set of access and VNF nodes and $E^{SFC}_{i}$ ($i = 1, \ldots, T$) denotes the links between them. In particular, the set $V^{SFC}_{i}$ is given by the union of $V^{SFC}_{i,A}$ and $V^{SFC}_{i,F}$, denoting the sets of access nodes and VNFs, respectively. The graph is characterized by the following parameters:

(i) $\alpha_{vw}$: assuming the value 1 if the access node $v \in \bigcup_{i=1}^{T} V^{SFC}_{i,A}$ characterizes an SFC request starting/terminating from/to the physical access node $w \in V^{PN}_{A}$; otherwise its value is 0;

(ii) $\beta_{vk}$: assuming the value 1 if the VNF node $v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}$ needs the application of the VNF $f_k$ ($k \in [1, F]$); otherwise its value is 0;

(iii) $B^{SFC}(v)$: the processing capacity requested by the VNF node $v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}$; the parameter value depends on both the packet length and the bandwidth of the packet flow incoming to the VNF node;

(iv) $C^{SFC}(e)$: bandwidth requested by the link $e \in \bigcup_{i=1}^{T} E^{SFC}_{i}$.

An example of an SFC request is represented in Figure 1, where the traffic emitted by the ingress access node $u$ has to be handled by the VNF nodes $v_1$ and $v_2$ and is terminating at the egress access node $t$. The access and VNF nodes are interconnected by the links $e_1$, $e_2$, and $e_3$. If the flow bandwidth is 1 Mbps and the packet length is 1500 bytes, we obtain the processing capacities $B^{SFC}(v_1) = B^{SFC}(v_2) = 83.33$, while the bandwidths $C^{SFC}(e_1)$, $C^{SFC}(e_2)$, and $C^{SFC}(e_3)$ of all links equal 1.
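Making the computation behind these figures explicit (our restatement of the example), the processing capacity requested by each VNF node is simply the packet rate of the incoming flow:

$$B^{SFC}(v_1) = B^{SFC}(v_2) = \frac{10^{6}\,\text{bit/s}}{1500\,\text{B} \times 8\,\text{bit/B}} \approx 83.33\ \text{packets/s},$$

while each virtual link requests the full flow bandwidth of 1 Mbps.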

3.2. Optimization Problem for the SFC Routing and VNF Instance Dimensioning. The objective of the SFC Routing and VNF Instance Dimensioning (SRVID) problem is to maximize the number of accepted SFC requests. The output of the problem is characterized by the following: (i) the servers in which the VNF nodes are executed and (ii) the network paths in which the virtual links of the SFCs are routed. This arrangement has to be accomplished without violating either the server processing or the physical link capacities. We assume that all of the servers create a VNF instance for each type of VNF, which will be shared by the SFCs using that server and requesting that type of VNF. For this reason, each server will activate $F$ VNF instances, one for each type, and another output of the problem is the number of Vcores to be assigned to each VNF instance.

We assume that any virtual link of the SFC graphs can be routed through a single physical network path. We introduce the following notations:

(i) $P$: set of paths in $G^{PN}$;

(ii) $\delta_{dp}$: binary function assuming the value 1 or 0 if the network link $d$ belongs or does not belong to the path $p \in P$, respectively;

(iii) $a^{PN}(p)$ and $b^{PN}(p)$: origin and destination nodes of the path $p \in P$;

(iv) $a^{SFC}(d)$ and $b^{SFC}(d)$: origin and destination nodes of the virtual link $d \in \bigcup_{i=1}^{T} E^{SFC}_{i}$.

Next we formulate the optimal SRVID problem, characterized by the following optimization variables:

(i) $x_h$: binary variable assuming the value 1 if the $h$th SFC request is accepted; otherwise its value is zero;

(ii) $y_{wk}$: integer variable characterizing the number of Vcores allocated to the VNF instance of type $k$ in the server $w \in V^{PN}_{S}$;

(iii) $z^{k}_{vw}$: binary variable assuming the value 1 if the VNF node $v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}$ is served by the VNF instance of type $k$ in the server $w \in V^{PN}_{S}$;

(iv) $u_{dp}$: binary variable assuming the value 1 if the virtual link $d \in \bigcup_{i=1}^{T} E^{SFC}_{i}$ is embedded in the physical network path $p \in P$; otherwise its value is zero.

Next we report the constraints for the optimization variables:

$$\sum_{k=1}^{F} y_{wk} \le N^{PN}_{\mathrm{core}}(w), \quad w \in V^{PN}_{S} \quad (1)$$

$$\sum_{k=1}^{F} \sum_{w \in V^{PN}_{S}} z^{k}_{vw} \le 1, \quad v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F} \quad (2)$$

$$z^{k}_{vw} \le \beta_{vk}, \quad k \in [1, F],\; v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F},\; w \in V^{PN}_{S} \quad (3)$$

$$\sum_{v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}} z^{k}_{vw}\, B^{SFC}(v)\, t^{proc}_{k} \le y_{wk}, \quad k \in [1, F],\; w \in V^{PN}_{S} \quad (4)$$

$$u_{dp} \le z^{k}_{a^{SFC}(d)\,a^{PN}(p)}, \quad a^{SFC}(d) \in \bigcup_{i=1}^{T} V^{SFC}_{i,F},\; k \in [1, F],\; p \in P \quad (5)$$

$$u_{dp} \le z^{k}_{b^{SFC}(d)\,b^{PN}(p)}, \quad b^{SFC}(d) \in \bigcup_{i=1}^{T} V^{SFC}_{i,F},\; k \in [1, F],\; p \in P \quad (6)$$

$$u_{dp} \le \alpha_{a^{SFC}(d)\,a^{PN}(p)}, \quad a^{SFC}(d) \in \bigcup_{i=1}^{T} V^{SFC}_{i,A},\; p \in P \quad (7)$$

$$u_{dp} \le \alpha_{b^{SFC}(d)\,b^{PN}(p)}, \quad b^{SFC}(d) \in \bigcup_{i=1}^{T} V^{SFC}_{i,A},\; p \in P \quad (8)$$

$$\sum_{p \in P} u_{dp} \le 1, \quad d \in \bigcup_{i=1}^{T} E^{SFC}_{i} \quad (9)$$

$$\sum_{d \in \bigcup_{i=1}^{T} E^{SFC}_{i}} C^{SFC}(d) \sum_{p \in P} \delta_{ep}\, u_{dp} \le C^{PN}(e), \quad e \in E^{PN} \quad (10)$$

$$x_{h} \le \sum_{k=1}^{F} \sum_{w \in V^{PN}_{S}} z^{k}_{vw}, \quad h \in [1, T],\; v \in V^{SFC}_{h,F} \quad (11)$$

$$x_{h} \le \sum_{p \in P} u_{dp}, \quad h \in [1, T],\; d \in E^{SFC}_{h} \quad (12)$$

Constraint (1) establishes the fact that at most a number of Vcores equal to the available ones is used for each server node $w \in V^{PN}_{S}$. Constraint (2) expresses the condition that a VNF node can be served by only one VNF instance. To guarantee that a node $v \in V^{SFC}_{h,F}$ needing the application of a VNF type is mapped to a correct VNF instance, we introduce constraint (3). Constraint (4) limits the number of VNF nodes assigned to one VNF instance, by taking into account both the number of Vcores assigned to the VNF instance and the required VNF node processing capacities. Constraints (5)-(8) establish the fact that, when the virtual link $d$ is supported by the physical network path $p$, then $a^{PN}(p)$ and $b^{PN}(p)$ must be the physical network nodes that the nodes $a^{SFC}(d)$ and $b^{SFC}(d)$ of the virtual graph are assigned to. The choice of mapping any virtual link on a single physical network path is represented by constraint (9). Constraint (10) avoids any overloading of any physical network link. Finally, constraints (11) and (12) establish the fact that an SFC request can be accepted only when the nodes and the links of the SFC graph are assigned to VNF instances and physical network paths.

The problem objective is to maximize the number of accepted SFCs, given by

$$\max \sum_{h=1}^{T} x_{h}. \quad (13)$$
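To make the structure of the model concrete, the following reduced sketch encodes objective (13) together with constraints (1), (9), (10), and (12) using the PuLP library; the placement consistency constraints (2)-(8) and (11) are omitted for brevity, and all input data are small placeholders, not the instances evaluated later.

```python
import pulp

# Reduced sketch of the SRVID model: objective (13) with constraints (1),
# (9), (10), and (12) only; all data below are small placeholders.
T, F = 4, 2                                   # SFC requests, VNF types
servers = ["w1", "w2"]
links = ["e1", "e2"]
paths = {"p1": ["e1"], "p2": ["e1", "e2"]}    # delta_{ep} as path -> links
Ncore = {"w1": 8, "w2": 8}                    # N^PN_core(w)
Cpn = {"e1": 1000.0, "e2": 1000.0}            # C^PN(e)
sfc_links = {h: [f"d{h}"] for h in range(T)}  # one virtual link per SFC
Csfc = {d: 1.0 for h in range(T) for d in sfc_links[h]}  # C^SFC(d)

prob = pulp.LpProblem("SRVID", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", range(T), cat="Binary")
y = pulp.LpVariable.dicts("y", (servers, range(F)), lowBound=0, cat="Integer")
u = pulp.LpVariable.dicts("u", (list(Csfc), list(paths)), cat="Binary")

prob += pulp.lpSum(x[h] for h in range(T))                    # objective (13)
for w in servers:                                             # constraint (1)
    prob += pulp.lpSum(y[w][k] for k in range(F)) <= Ncore[w]
for d in Csfc:                                                # constraint (9)
    prob += pulp.lpSum(u[d][p] for p in paths) <= 1
for e in links:                                               # constraint (10)
    prob += pulp.lpSum(Csfc[d] * u[d][p] for d in Csfc
                       for p in paths if e in paths[p]) <= Cpn[e]
for h in range(T):                                            # constraint (12)
    for d in sfc_links[h]:
        prob += x[h] <= pulp.lpSum(u[d][p] for p in paths)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("accepted SFCs:", sum(int(x[h].value()) for h in range(T)))
```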

The introduced optimization problem is intractable because it requires solving an NP-hard bin packing problem [26]. Due to its high complexity, it is not possible to solve the problem directly in a timely manner, given the large number of servers and network nodes. For this reason, we propose an efficient heuristic in Section 3.3.

3.3. Maximizing the Accepted SFC Requests Number (MASRN) Heuristic. The Maximizing the Accepted SFC Requests Number (MASRN) heuristic is based on the embedding algorithm proposed in [14] and has the objective of maximizing the number of accepted SFC requests. The MASRN algorithm tries routing the SFCs one by one, using the least loaded link and server resources so as to balance their use. Algorithm 1 illustrates the operation mode of the MASRN algorithm; the routing of a target SFC, the $k_{tar}$th, is illustrated. The main inputs of the algorithm are (i) the set $T_{pre}$ containing the indexes of the SFC requests routed successfully before the $k_{tar}$th SFC request; (ii) the $k_{tar}$th SFC request graph $G^{SFC}_{k_{tar}} = (V^{SFC}_{k_{tar}}, E^{SFC}_{k_{tar}})$; (iii) the values of the parameters $\alpha_{vw}$ for the $k_{tar}$th SFC request; (iv) the values of the variables $z^{k}_{vw}$ and $u_{dp}$ for the SFC requests successfully routed before the $k_{tar}$th SFC request, as defined in Section 3.2.

The operation mode of the heuristic is based on thefollowing variables

(i) $A_k(w)$: the load of the cores allocated for the VNF instance of type $k \in [1, F]$ in the server $w \in \mathcal{V}^{PN}_S$; the value of $A_k(w)$ is initialized by taking into account the core resource amount used by the SFCs successfully routed before the $k_{tar}$th SFC request; hence its initial value is

$$A_k(w) = \sum_{v \in \bigcup_{i \in T_{pre}} \mathcal{V}^{SFC_i}_F} z^{k}_{v,w}\, B^{SFC}(v)\, t^{proc}_k. \quad (14)$$

(ii) $S_N(w)$: defined as the stress of the node $w \in \mathcal{V}^{PN}_S$ [14] and characterizing the server load; its value is initialized to

$$S_N(w) = \sum_{k=1}^{F} A_k(w). \quad (15)$$

(iii) $S_L(e)$: defined as the stress of the physical network link $e \in \mathcal{E}^{PN}$ [14] and characterizing the link load; its value is initialized to

$$S_L(e) = \sum_{d \in \bigcup_{i \in T_{pre}} \mathcal{E}^{SFC_i}} C^{SFC}(d) \sum_{p \in \mathcal{P}} \delta_{e,p}\, u_{d,p}. \quad (16)$$


(1) Input: Physical Network Graph $\mathcal{G}^{PN} = (\mathcal{V}^{PN}, \mathcal{E}^{PN})$; $k_{tar}$th SFC request; $k_{tar}$th SFC request graph $\mathcal{G}^{SFC_{k_{tar}}} = (\mathcal{V}^{SFC_{k_{tar}}}, \mathcal{E}^{SFC_{k_{tar}}})$; $T_{pre}$; $\alpha_{v,w}$, $v \in \mathcal{V}^{SFC_{k_{tar}}}_A$, $w \in \mathcal{V}^{PN}_A$; $z^{k}_{v,w}$, $k \in [1, F]$, $v \in \bigcup_{i \in T_{pre}} \mathcal{V}^{SFC_i}_F$, $w \in \mathcal{V}^{PN}_S$; $u_{d,p}$, $d \in \bigcup_{i \in T_{pre}} \mathcal{E}^{SFC_i}$, $p \in \mathcal{P}$
(2) Variables: $A_k(w)$, $k \in [1, F]$, $w \in \mathcal{V}^{PN}_S$; $S_N(w)$, $w \in \mathcal{V}^{PN}_S$; $S_L(e)$, $e \in \mathcal{E}^{PN}$; $y_{w,k}$, $k \in [1, F]$, $w \in \mathcal{V}^{PN}_S$
/* Evaluation of the candidate mapping of $\mathcal{G}^{SFC_{k_{tar}}} = (\mathcal{V}^{SFC_{k_{tar}}}, \mathcal{E}^{SFC_{k_{tar}}})$ in $\mathcal{G}^{PN} = (\mathcal{V}^{PN}, \mathcal{E}^{PN})$ */
(3) assign the nodes $v \in \mathcal{V}^{SFC_{k_{tar}}}_A$ to the physical network nodes $w \in \mathcal{V}^{PN}_A$ according to the value of the parameter $\alpha_{v,w}$
(4) select the nodes $w \in \mathcal{V}^{PN}_S$ and the physical network paths $p \in \mathcal{P}$ to be assigned to the server nodes $v \in \mathcal{V}^{SFC_{k_{tar}}}_F$ and virtual links $d \in \mathcal{E}^{SFC_{k_{tar}}}$ by applying the algorithm proposed in [14], based on the values of the node and link stresses $S_N(w)$ and $S_L(e)$; determine the variable values $z^{k}_{v,w}$, $k \in [1, F]$, $v \in \mathcal{V}^{SFC_{k_{tar}}}_F$, $w \in \mathcal{V}^{PN}_S$, and $u_{d,p}$, $d \in \mathcal{E}^{SFC_{k_{tar}}}$, $p \in \mathcal{P}$
/* Server Resource Availability Check Phase */
(5) for $v \in \mathcal{V}^{SFC_{k_{tar}}}_F$, $w \in \mathcal{V}^{PN}_S$, $k \in [1, F]$ do
(6) if $\sum_{s=1, s \neq k}^{F} y_{w,s} + \lceil A_k(w) + z^{k}_{v,w} B^{SFC}(v)\, t^{proc}_k \rceil \le N^{PN}_{core}(w)$ then
(7) $A_k(w) = A_k(w) + z^{k}_{v,w} B^{SFC}(v)\, t^{proc}_k$
(8) $S_N(w) = S_N(w) + z^{k}_{v,w} B^{SFC}(v)\, t^{proc}_k$
(9) $y_{w,k} = \lceil A_k(w) + z^{k}_{v,w} B^{SFC}(v)\, t^{proc}_k \rceil$
(10) else
(11) REJECT the $k_{tar}$th SFC request
(12) end if
(13) end for
/* Link Resource Availability Check Phase */
(14) for $e \in \mathcal{E}^{PN}$ do
(15) if $S_L(e) + \sum_{d \in \mathcal{E}^{SFC_{k_{tar}}}} C^{SFC}(d) \sum_{p \in \mathcal{P}} \delta_{e,p}\, u_{d,p} \le C^{PN}(e)$ then
(16) $S_L(e) = S_L(e) + \sum_{d \in \mathcal{E}^{SFC_{k_{tar}}}} C^{SFC}(d) \sum_{p \in \mathcal{P}} \delta_{e,p}\, u_{d,p}$
(17) else
(18) REJECT the $k_{tar}$th SFC request
(19) end if
(20) end for
(21) Output: $z^{k}_{v,w}$, $k \in [1, F]$, $v \in \mathcal{V}^{SFC_{k_{tar}}}_F$, $w \in \mathcal{V}^{PN}_S$; $u_{d,p}$, $d \in \mathcal{E}^{SFC_{k_{tar}}}$, $p \in \mathcal{P}$

Algorithm 1: MASRN algorithm.

The candidate physical network links and nodes in which to embed the target SFC are evaluated. First of all, the access nodes are evaluated (line 3), according to the values of the parameters $\alpha_{v,w}$, $v \in \mathcal{V}^{SFC_{k_{tar}}}_A$, $w \in \mathcal{V}^{PN}_A$. Next, the MASRN algorithm selects a cluster of server nodes (line 4) that are not only lightly loaded but also likely to result in low substrate link stresses when they are connected. Details on the procedure are reported in [14].

In the next phases, the resource availability in the selected cluster is checked. The resource availability in the server nodes and physical network links is verified in the Server (lines 5–13) and Link (lines 14–20) Resource Availability Check Phases, respectively. If resources are not available in either links or nodes, the target SFC request is rejected. In particular, in the Server Resource Availability Check Phase the algorithm verifies whether the allocation of new Vcores is needed and, in this case, their availability is checked (line 6). The variable $y_{w,k}$ is also updated in this phase (line 9).
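The two check phases can be condensed into the following Python sketch; the containers (A, S_N, S_L, y, and the fields of the sfc object) are illustrative assumptions, not the paper's code, and a complete implementation would also roll back the updates already committed when a later check rejects the request.

```python
import math

def availability_checks(sfc, A, S_N, S_L, y, n_core, z, B, t_proc,
                        u, C_sfc, delta, C_pn):
    """Sketch of Algorithm 1, lines (5)-(20); all containers are assumed to be
    tuple-keyed dicts built as described in Section 3.3 (illustrative)."""
    # Server Resource Availability Check Phase (lines 5-13)
    for (v, w, k) in sfc.candidate_assignments:     # v: VNF node, w: server, k: type
        demand = z[k, v, w] * B[v] * t_proc[k]      # core load v would add on w
        other_vcores = sum(y[w, s] for s in range(sfc.F) if s != k)
        if other_vcores + math.ceil(A[k, w] + demand) > n_core[w]:
            return False                            # line (11): REJECT
        A[k, w] += demand                           # line (7)
        S_N[w] += demand                            # line (8)
        y[w, k] = math.ceil(A[k, w])                # line (9): allocated Vcores
    # Link Resource Availability Check Phase (lines 14-20)
    for e in sfc.physical_links:
        extra = sum(C_sfc[d] * delta[e, p] * u[d, p]
                    for d in sfc.virtual_links for p in sfc.paths)
        if S_L[e] + extra > C_pn[e]:
            return False                            # line (18): REJECT
        S_L[e] += extra                             # line (16)
    return True                                     # request can be accepted
```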

Finally, the outputs (line 21) of the MASRN algorithm are the selected values of the variables $z^{k}_{v,w}$, $k \in [1, F]$, $v \in \mathcal{V}^{SFC_{k_{tar}}}_F$, $w \in \mathcal{V}^{PN}_S$, and $u_{d,p}$, $d \in \mathcal{E}^{SFC_{k_{tar}}}$, $p \in \mathcal{P}$.

The computational complexity of the MASRN algorithm depends on the procedure for the evaluation of the potential nodes [14]. Let $N_s$ be the total number of servers. The complexity of the phase in which the potential nodes are evaluated can be characterized according to the following remarks: (i) as many servers as the number of VNFs of the SFC are determined, and (ii) the $h$th server is selected by evaluating the $(N_s - h)$ shortest paths and by performing a minimum operation on a list with $(N_s - h)$ elements. According to these remarks, the computational complexity is given by $O(V(V + N_s)N_l \log(N_s + N_n))$, where $V$ is the maximum number of VNFs in an SFC and $N_l$ and $N_n$ are the numbers of links and nodes of the network, respectively.

[Figure 2: SFC planner architecture: an Orchestrator comprising the Consolidation Module, the SFC Scheduler, and the Resource Monitor, exchanging SFC requests, scheduling decisions, migration decisions, and resource information.]

Figure 2: SFC planner architecture.

4. Online Algorithms for SFC Routing in NFV Network Architectures

In the online approach, the SFC requests arrive at the system over time and each of them is characterized by a certain duration. In order to reduce the complexity of the online SFC resource allocation algorithm, we designed an SFC planner, whose general architecture is described in Section 4.1. The operation mode of the SFC planner is based on the algorithms that we describe in Section 4.2.

4.1. SFC Planner Architecture. The architecture of the SFC planner is shown in Figure 2. It could make up the main component of an orchestrator in the NFV architecture [6]. It is composed of three components: the SFC Scheduler, the Resource Monitor, and the Consolidation Module.

Once the SFC Scheduler receives an SFC request, it executes an algorithm to verify whether resources are available and to decide which resources to use.

The physical resources, as well as the Virtual Machines (VMs) running on the servers, are monitored by the Resource Monitor. If a failure of any physical/virtual node or link occurs, it notifies the event to the SFC Scheduler.

The Consolidation Module allows for server resource consolidation in low traffic periods. When the consolidation technique is applied, VMs are migrated to as few servers as possible, the remaining ones are turned off, and power consumption savings can be achieved.

4.2. SFC Scheduling and Consolidation Algorithms. In this section we describe the algorithms executed by the SFC Scheduler and the Consolidation Module.

Whenever a new SFC request arrives at the SFC planner, the SFC scheduling algorithm is executed, in order to find the best embedding in the system while maintaining a uniform usage of the network resources. The main steps of this procedure are reported in the flow chart in Figure 3 and are based on the MASRN heuristic described in Section 3.3. The algorithm starts by selecting the set of servers on which the VNFs requested by the SFC will be executed. In the second step, the physical paths on which the virtual links will be mapped are selected. If there are not enough available resources in the first or in the second step, the request is dropped.

[Figure 3: flow chart of the SFC scheduling algorithm: servers selection using the potential factor [26]; if enough resources are available, path selection; if enough resources are available, accept the SFC request; otherwise drop it.]

Figure 3: Flow chart of the SFC scheduling algorithm.

The flow chart of the consolidation algorithm is reported in Figure 4. Each server $s \in \mathcal{V}^{PN}_S$ is characterized by a parameter $\eta_s$, defined as the ratio of the total amount of incoming/outgoing traffic $\Lambda_s$ that it is handling to the power $P_s$ it is consuming, that is, $\eta_s = \Lambda_s / P_s$. The power consumption model assumes a constant power contribution and a variable one that increases linearly with the handled traffic. The server power consumption is expressed by

$$P_s = P_{idle} + (P_{\max} - P_{idle}) \frac{L_s}{C_s}, \quad \forall s \in \mathcal{V}^{PN}_S, \quad (17)$$

where $P_{idle}$ is the power consumed by the server when no traffic is handled, $P_{\max}$ is the maximum server power consumption, $L_s$ is the total load handled by the server $s$, and $C_s$ is its maximum capacity. The maximum power that the server can consume is expressed as $P_{\max} = P_{idle}/a$, where $a$ is a parameter characterizing how much the power consumption depends on the handled traffic. The power consumption is the more rate adaptive, the lower $a$ is. For $a = 1$ the rate-adaptive power consumption is absent and the consumed power is constant.
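The power model and the efficiency metric that drive the consolidation decisions can be captured in a few lines of Python; the sketch below assumes the relation $P_{\max} = P_{idle}/a$ given above, and the numeric values in the example are illustrative.

```python
def server_power(load, capacity, p_idle, a):
    """Server power model of (17): a constant idle contribution plus a
    contribution that grows linearly with the handled load."""
    p_max = p_idle / a                    # relation P_max = P_idle / a
    return p_idle + (p_max - p_idle) * load / capacity

def efficiency(traffic, power):
    """eta_s = Lambda_s / P_s: traffic handled per watt consumed."""
    return traffic / power

# Example with P_max = 1000 W and C_s = 10 Gbps (a = 0.1 -> P_idle = 100 W)
p = server_power(load=4.0, capacity=10.0, p_idle=100.0, a=0.1)
print(p, efficiency(4.0, p))
```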

The proposed algorithm tries switching off the server with the minimum power consumption per bit carried. For this reason, when the consolidation algorithm is executed, it starts selecting the server $s_{\min}$ with the minimum value of $\eta_s$ as the one to turn off.

[Figure 4: flow chart of the SFC consolidation algorithm: select the server with minimum η_s; select the set of possible destinations and sort it by descending η_s; starting from the first server of the set, perform the migration if there are enough resources, otherwise remove the server from the set; when the set is empty, select another server to switch off.]

Figure 4: Flow chart of the SFC consolidation algorithm.

A set of possible destination servers able to host the VMs executed on $s_{\min}$ is then evaluated and sorted in descending order, based again on the value of $\eta_s$. The algorithm proceeds by selecting the first element of the ordered set and evaluating whether there are enough resources in the site to reroute all the flows generated from or terminated at $s_{\min}$. If it succeeds, the migration is performed and the server $s_{\min}$ is turned off. If it fails, the destination server is removed from the set and the next server of the list is considered. When the ordered set of possible destinations becomes empty, another server to turn off is selected.
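A minimal sketch of one pass of this selection loop follows; the feasibility test is passed in as a callable, the data structures are illustrative, and, unlike the full algorithm, the sketch simply reports failure instead of moving on to the next server to switch off.

```python
def consolidation_step(active, eta, can_host_all):
    """One pass of the Figure 4 loop: pick the least efficient server s_min,
    try destinations in descending order of eta_s, migrate if feasible."""
    s_min = min(active, key=lambda s: eta[s])         # candidate to switch off
    destinations = sorted((s for s in active if s != s_min),
                          key=lambda s: eta[s], reverse=True)
    for dst in destinations:
        if can_host_all(s_min, dst):                  # enough resources on dst?
            active.remove(s_min)                      # migrate VMs, turn off s_min
            return (s_min, dst)
    return None                                       # set exhausted: no migration

active = {"s1", "s2", "s3"}
eta = {"s1": 0.002, "s2": 0.008, "s3": 0.005}         # Gbps per watt (toy values)
print(consolidation_step(active, eta, lambda src, dst: True))   # ('s1', 's2')
```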

The SFC Consolidation Module also manages the resource deconsolidation procedure, when the reactivation of physical servers/links is needed. Basically, the deconsolidation algorithm selects the most loaded VMs already migrated to a new server and moves them back to their original servers. The flows handled by these VMs are accordingly rerouted.

5. Numerical Results

We evaluate the effectiveness of the proposed algorithms in the case of the network scenario reported in Figure 5. The network is composed of five core nodes, five edge nodes, and six access nodes, in which the SFC requests are randomly generated and terminated. A Network Function Virtualization (NFV) site is connected to edge nodes 3 and 4 through the two access routers 1 and 2. The NFV site is also composed of two switches and sixteen servers. In the basic configuration, 40 Gbps links are considered, except the links connecting the servers to the switches in the NFV site, whose rate is equal to 10 Gbps. The sixteen servers are equipped with 48 Vcores each.

Each SFC request is composed of three Virtual Network Functions (VNFs), that is, a firewall, a load balancer, and VPN encryption, whose packet processing times are assumed to be equal to 7.08 μs, 0.65 μs, and 1.64 μs [27], respectively. We also assume that the load balancer splits the input traffic towards a number of output links chosen uniformly from 1 to 3.

An example of graph for a generated SFC is reported in Figure 6, in which $u_t$ is an ingress access node and $v^1_t$, $v^2_t$, and $v^3_t$ are egress access nodes. The first and second VNFs are a firewall and VPN encryption; the third VNF is a load balancer splitting the traffic towards three output links. In the considered case study we also assume that the SFC handles traffic flows of packet length equal to 1500 bytes and required bandwidth uniformly distributed from 500 Mbps to 1 Gbps.

5.1. Offline SFC Scheduling. In the offline case we assume that a given number T of SFC requests is known a priori. For the case study previously described, we apply the MASRN heuristic proposed in Section 3.3. We report the percentage of dropped SFCs in Figure 7 as a function of the offered number of SFCs. We report the curve for the basic scenario and the ones in which the link capacities are doubled, quadrupled, and increased by ten times, respectively. From Figure 7 we can see that, in order to have a dropped-SFC percentage lower than 10%, the number of SFC requests has to be smaller than 60 in the basic scenario. This remarkable blocking is due to the shortage of network capacity. In fact, we can notice from Figure 7 how the dropped-SFC percentage significantly decreases when the link bandwidth increases. For instance, when the capacity is quadrupled, the network infrastructure is able to accommodate up to 180 SFCs without any loss.

This is confirmed in Figure 8, where we report the number of Vcores occupied in the servers in the case of T = 360. Each of the servers is represented on the x-axis with the identification (ID) of Figure 5. We also show in Figure 8 the number of Vcores allocated to each firewall, load balancer, and VPN encryption VNF with the blue, green, and red colors, respectively. The basic scenario case and the ones in which the link capacity is doubled, quadrupled, and increased by 10 times are reported in Figures 8(a), 8(b), 8(c), and 8(d), respectively. From Figure 8 we can notice how the MASRN heuristic works correctly, by guaranteeing a uniform occupancy of the servers in terms of the total number of Vcores used. We can also notice how the firewall VNF is the one needing the highest number of allocated Vcores. That is a consequence of the higher processing time that the firewall VNF requires with respect to the load balancer and VPN encryption VNFs.

5.2. Online SFC Scheduling. The numerical results are achieved in the following traffic scenario. The traffic pattern we consider is supposed to be sinusoidal, with a peak value during daytime and a minimum value during nighttime. We also assume that SFC requests follow a Poisson process and that the SFC duration time is distributed according to an exponential distribution.


[Figure 5: network topology with six access nodes, five edge nodes (Edge 1–5), five core nodes (Core 1–5), and an NFV site attached through Router 1 and Router 2, containing Switch 1, Switch 2, and Server 1–Server 16.]

Figure 5: The network topology is composed of six access nodes, five edge nodes, and five core nodes. The NFV site is composed of two routers, two switches, and sixteen servers.

The following parameters define the SFC requests in the Peak Hour Interval (PHI):

(i) λ is the arrival rate of SFC requests;

(ii) μ is the termination rate of an embedded SFC;

(iii) $[\beta^{\min}, \beta^{\max}]$ is the range in which the bandwidth request for an SFC is randomly selected.

We consider K time intervals in a day, each characterized by a scaling factor $\alpha_h$, with $h = 0, \ldots, K-1$. In order to capture the sinusoidal shape of the traffic in a day, $\alpha_h$ is evaluated with the following expression:

$$\alpha_h = \begin{cases} 1, & h = 0, \\ 1 - \dfrac{2h}{K}\,(1 - \alpha_{\min}), & h = 1, \ldots, \dfrac{K}{2}, \\ 1 - \dfrac{2(K - h)}{K}\,(1 - \alpha_{\min}), & h = \dfrac{K}{2} + 1, \ldots, K - 1, \end{cases} \quad (18)$$

where $\alpha_0$ and $\alpha_{\min}$ refer to the peak traffic and the minimum traffic scenario, respectively.

The parameter $\alpha_h$ affects both the SFC arrival rate and the requested bandwidth.


[Figure 6: an SFC graph with ingress node u_t, chain FW (firewall VNF) → EV (VPN encryption VNF) → LB (load balancer VNF), and egress nodes v¹_t, v²_t, v³_t.]

Figure 6: An example of SFC composed of one firewall VNF, one VPN encryption VNF, and one load balancer VNF that splits the input traffic towards three output logical links. $u_t$ denotes the ingress access node, while $v^1_t$, $v^2_t$, and $v^3_t$ denote the egress access nodes.

[Figure 7: percentage of dropped SFC requests (0 to 90%) versus the number of offered SFC requests (30 to 360), for link capacities ×1, ×2, ×4, and ×10.]

Figure 7: Dropped SFC percentage as a function of the number of SFCs offered. The four curves are reported for the basic scenario case and the ones in which the link capacities are doubled, quadrupled, and increased by 10 times.

In particular, we have the following expressions for the SFC request rate $\lambda_h$ and for the minimum and maximum requested bandwidths $\beta^{\min}_h$ and $\beta^{\max}_h$, respectively, in interval $h$ ($h = 0, \ldots, K-1$):

$$\lambda_h = \alpha_h \lambda, \qquad \beta^{\min}_h = \alpha_h \beta^{\min}, \qquad \beta^{\max}_h = \alpha_h \beta^{\max}. \quad (19)$$
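A short Python sketch of the traffic scaling of (18)-(19) follows; the chosen value of $\alpha_{\min}$ is an illustrative assumption.

```python
def alpha(h, K, alpha_min):
    """Scaling factor of (18): falls linearly from 1 (h = 0) to alpha_min
    (h = K/2) and rises back towards 1, approximating the daily traffic shape."""
    if h <= K // 2:
        return 1 - (2 * h / K) * (1 - alpha_min)
    return 1 - (2 * (K - h) / K) * (1 - alpha_min)

K, alpha_min = 24, 0.2                     # 24 hourly intervals (alpha_min assumed)
lam, beta_min, beta_max = 1.0, 0.5, 1.0    # PHI values: req/min, Gbps, Gbps
for h in range(K):                         # per-interval parameters, as in (19)
    a_h = alpha(h, K, alpha_min)
    print(h, a_h * lam, a_h * beta_min, a_h * beta_max)
```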

Next, we evaluate the performance of the SFC planner proposed in Section 4 in the dynamic traffic scenario, for the parameter values $\beta^{\min}$ = 500 Mbps, $\beta^{\max}$ = 1 Gbps, λ = 1 request/min, and μ = 1/15 min⁻¹. We also assume K = 24, with a traffic change, and consequently an application of the consolidation algorithm, every hour.

Finally, we consider servers whose power consumption is characterized by expression (17), with parameters $P_{\max}$ = 1000 W and $C_s$ = 10 Gbps.

We report the SFC blocking probability as a function of the average number of offered SFC requests during the PHI (λ/μ) in Figure 9. We show the results for the basic link capacity scenario and for the ones obtained considering link capacities twice and four times higher than the basic one. We consider the case of servers with no rate-adaptive power consumption (a = 1). As we can observe, when the offered traffic is low, the degradation in terms of SFC blocking probability introduced by the consolidation technique increases. In fact, in this low traffic scenario more resource consolidation is possible, with the consequence of higher SFC blocking probabilities. When the offered traffic increases, the blocking probability with or without applying the consolidation technique does not change much, because the network is heavily loaded and only few servers can be turned off. Furthermore, as we can expect, the blocking probability decreases when the link capacity increases. The performance loss due to the application of the consolidation technique is what we have to pay in order to obtain benefits in terms of power saved by turning off servers. The curves in Figure 10 compare the power consumption percentage savings that we can achieve in the cases a = 0.1 and a = 1. The results show that, when the offered traffic decreases and the link capacity increases, higher power consumption savings can be achieved. The reason is that we have more resources on the links and less loaded VMs, which allow for a higher number of VM migrations and more resource consolidation. The other result to notice is that the use of rate-adaptive servers reduces the power consumption saving when we perform resource consolidation. This is due to the lower value $P_{idle}$ of the constant power consumption of the server, which leads to a lower power consumption saving when a server is switched off.

6. Conclusions

The aim of this paper has been to propose heuristics for the resource dimensioning and the routing of Service Function Chains in network architectures employing the Network Function Virtualization technology. We have introduced an optimization problem whose objective is to maximize the number of accepted SFCs subject to the server processing and link bandwidth capacity constraints. The problem being NP-hard, we have proposed the Maximizing the Accepted SFC Requests Number (MASRN) heuristic, which is based on the uniform occupancy of the server and link resources. The heuristic is also able to dimension the Vcores of the servers, by evaluating the number of Vcores to be allocated to each VNF instance.

An SFC planner has also been proposed for the dynamic traffic scenario. The planner is based on a consolidation algorithm that allows for the switching off of servers during low traffic periods. A case study has been analyzed, in which we have shown that the heuristic works correctly, by uniformly allocating the server resources and reserving a higher number of Vcores for the VNFs characterized by higher packet processing times. Furthermore, the proposed SFC planner allows for a power consumption saving dependent on the offered traffic and varying from 10% to 70%. Such a saving is to be paid with an increase in SFC blocking probability. As future research, we will propose and investigate Virtual Machine migration policies


[Figure 8: four bar charts, (a) basic link capacity, (b) capacity doubled, (c) capacity quadrupled, and (d) capacity increased by 10 times, reporting the number of Vcores allocated per VNF instance (0 to 48) in each of the sixteen servers, for T = 360.]

Figure 8: Number of allocated Vcores in each server. The numbers of Vcores for the firewall, load balancer, and VPN encryption VNFs are represented with blue, green, and red colors, respectively.

[Figure 9: SFC blocking probability (log scale, from 1.00E−08 to 1.00E+00) versus the average number of offered SFC requests (30 to 360), with and without consolidation, for link capacities ×1, ×2, and ×4.]

Figure 9: SFC blocking probability as a function of the average number of SFCs offered in the PHI, for the cases in which the consolidation technique is applied and it is not. The server power consumption is constant (a = 1).


[Figure 10: power consumption percentage saving (0 to 80%) versus the average number of offered SFC requests (30 to 360), for link capacities ×1, ×2, and ×4 and for a = 1 and a = 0.1.]

Figure 10: The power consumption percentage saving achieved with the application of the consolidation technique versus the average number of SFCs offered in the PHI. The cases a = 1 and a = 0.1 are considered.

allowing for the achievement of the right trade-off between power consumption saving and SFC blocking probability degradation.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

The work described in this paper has been funded by Telecom Italia within the research agreement titled "Definition and Evaluation of Protocols and Architectures of a Network Automation Platform for TI-NFV Infrastructure."

References

[1] R. Mijumbi, J. Serrat, J. Gorricho, N. Bouten, F. De Turck, and R. Boutaba, "Network function virtualization: state-of-the-art and research challenges," IEEE Communications Surveys & Tutorials, vol. 18, no. 1, pp. 236–262, 2016.

[2] P. Veitch, M. J. McGrath, and V. Bayon, "An instrumentation and analytics framework for optimal and robust NFV deployment," IEEE Communications Magazine, vol. 53, no. 2, pp. 126–133, 2015.

[3] R. Yu, G. Xue, V. T. Kilari, and X. Zhang, "Network function virtualization in the multi-tenant cloud," IEEE Network, vol. 29, no. 3, pp. 42–47, 2015.

[4] Z. Bronstein, E. Roch, J. Xia, and A. Molkho, "Uniform handling and abstraction of NFV hardware accelerators," IEEE Network, vol. 29, no. 3, pp. 22–29, 2015.

[5] B. Han, V. Gopalakrishnan, L. Ji, and S. Lee, "Network function virtualization: challenges and opportunities for innovations," IEEE Communications Magazine, vol. 53, no. 2, pp. 90–97, 2015.

[6] ETSI Industry Specification Group (ISG) NFV, "ETSI Group Specifications on Network Function Virtualization, 1st Phase Documents," January 2015, http://docbox.etsi.org/ISG/NFV/Open.

[7] H. Hawilo, A. Shami, M. Mirahmadi, and R. Asal, "NFV: state of the art, challenges and implementation in next generation mobile networks (vEPC)," IEEE Network, vol. 28, no. 6, pp. 18–26, 2014.

[8] The Internet Engineering Task Force (IETF), Service Function Chaining (SFC) Working Group (WG), 2015, https://datatracker.ietf.org/wg/sfc/charter/.

[9] The Internet Engineering Task Force (IETF), Service Function Chaining (SFC) Working Group (WG) Documents, 2015, https://datatracker.ietf.org/wg/sfc/documents/.

[10] M. Yoshida, W. Shen, T. Kawabata, K. Minato, and W. Imajuku, "MORSA: a multi-objective resource scheduling algorithm for NFV infrastructure," in Proceedings of the 16th Asia-Pacific Network Operations and Management Symposium (APNOMS '14), pp. 1–6, Hsinchu, Taiwan, September 2014.

[11] H. Moens and F. D. Turck, "VNF-P: a model for efficient placement of virtualized network functions," in Proceedings of the 10th International Conference on Network and Service Management (CNSM '14), pp. 418–423, November 2014.

[12] R. Mijumbi, J. Serrat, J. L. Gorricho, N. Bouten, F. De Turck, and S. Davy, "Design and evaluation of algorithms for mapping and scheduling of virtual network functions," in Proceedings of the 1st IEEE Conference on Network Softwarization (NetSoft '15), pp. 1–9, University College London, London, UK, April 2015.

[13] S. Clayman, E. Maini, A. Galis, A. Manzalini, and N. Mazzocca, "The dynamic placement of virtual network functions," in Proceedings of the IEEE Network Operations and Management Symposium (NOMS '14), pp. 1–9, Krakow, Poland, May 2014.

[14] Y. Zhu and M. H. Ammar, "Algorithms for assigning substrate network resources to virtual network components," in Proceedings of the 25th IEEE International Conference on Computer Communications (INFOCOM '06), pp. 1–12, IEEE, Barcelona, Spain, April 2006.

[15] G. Lee, M. Kim, S. Choo, S. Pack, and Y. Kim, "Optimal flow distribution in service function chaining," in Proceedings of the 10th International Conference on Future Internet (CFI '15), pp. 17–20, ACM, Seoul, Republic of Korea, June 2015.

[16] A. Fischer, J. F. Botero, M. T. Beck, H. De Meer, and X. Hesselbach, "Virtual network embedding: a survey," IEEE Communications Surveys and Tutorials, vol. 15, no. 4, pp. 1888–1906, 2013.

[17] M. Rabbani, R. Pereira Esteves, M. Podlesny, G. Simon, L. Zambenedetti Granville, and R. Boutaba, "On tackling virtual data center embedding problem," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 177–184, Ghent, Belgium, May 2013.

[18] A. Basta, W. Kellerer, M. Hoffmann, H. J. Morper, and K. Hoffmann, "Applying NFV and SDN to LTE mobile core gateways, the functions placement problem," in Proceedings of the 4th Workshop on All Things Cellular: Operations, Applications and Challenges (AllThingsCellular '14), pp. 33–38, ACM, Chicago, Ill, USA, August 2014.


[19] M. Bagaa, T. Taleb, and A. Ksentini, "Service-aware network function placement for efficient traffic handling in carrier cloud," in Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC '14), pp. 2402–2407, IEEE, Istanbul, Turkey, April 2014.

[20] S. Mehraghdam, M. Keller, and H. Karl, "Specifying and placing chains of virtual network functions," in Proceedings of the IEEE 3rd International Conference on Cloud Networking (CloudNet '14), pp. 7–13, October 2014.

[21] M. Bouet, J. Leguay, and V. Conan, "Cost-based placement of vDPI functions in NFV infrastructures," in Proceedings of the 1st IEEE Conference on Network Softwarization (NetSoft '15), pp. 1–9, London, UK, April 2015.

[22] M. Xia, M. Shirazipour, Y. Zhang, H. Green, and A. Takacs, "Network function placement for NFV chaining in packet/optical datacenters," Journal of Lightwave Technology, vol. 33, no. 8, Article ID 7005460, pp. 1565–1570, 2015.

[23] O. Hyeonseok, Y. Daeun, C. Yoon-Ho, and K. Namgi, "Design of an efficient method for identifying virtual machines compatible with service chain in a virtual network environment," International Journal of Multimedia and Ubiquitous Engineering, vol. 11, no. 9, Article ID 197208, 2014.

[24] W. Cerroni and F. Callegati, "Live migration of virtual network functions in cloud-based edge networks," in Proceedings of the IEEE International Conference on Communications (ICC '14), pp. 2963–2968, IEEE, Sydney, Australia, June 2014.

[25] P. Quinn and J. Guichard, "Service function chaining: creating a service plane via network service headers," Computer, vol. 47, no. 11, pp. 38–44, 2014.

[26] M. F. Zhani, Q. Zhang, G. Simona, and R. Boutaba, "VDC Planner: dynamic migration-aware Virtual Data Center embedding for clouds," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 18–25, May 2013.

[27] M. Dobrescu, K. Argyraki, and S. Ratnasamy, "Toward predictable performance in software packet-processing platforms," in Proceedings of the 9th USENIX Symposium on Networked Systems Design and Implementation, pp. 1–14, April 2012.

Research Article
A Game for Energy-Aware Allocation of Virtualized Network Functions

Roberto Bruschi,1 Alessandro Carrega,1,2 and Franco Davoli1,2

1CNIT-University of Genoa Research Unit, 16145 Genoa, Italy
2DITEN, University of Genoa, 16145 Genoa, Italy

Correspondence should be addressed to Alessandro Carrega; alessandro.carrega@unige.it

Received 2 October 2015; Revised 22 December 2015; Accepted 11 January 2016

Academic Editor: Xavier Hesselbach

Copyright © 2016 Roberto Bruschi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Network Functions Virtualization (NFV) is a network architecture concept where network functionality is virtualized and separated into multiple building blocks that may connect or be chained together to implement the required services. The main advantages consist of an increase in network flexibility and scalability. Indeed, each part of the service chain can be allocated and reallocated at runtime, depending on demand. In this paper, we present and evaluate an energy-aware Game-Theory-based solution for resource allocation of Virtualized Network Functions (VNFs) within NFV environments. We consider each VNF as a player of the problem that competes for the physical network node capacity pool, seeking the minimization of an individual cost function. The physical network nodes dynamically adjust their processing capacity according to the incoming workload, by means of an Adaptive Rate (AR) strategy that aims at minimizing the product of energy consumption and processing delay. On the basis of the result of the nodes' AR strategy, the VNFs' resource sharing costs assume a polynomial form in the workflows, which admits a unique Nash Equilibrium (NE). We examine the effect of different (unconstrained and constrained) forms of the nodes' optimization problem on the equilibrium and compare the power consumption and delay achieved with energy-aware and non-energy-aware strategy profiles.

1. Introduction

In the last few years, power consumption has shown a growing and alarming trend in all industrial sectors, particularly in Information and Communication Technology (ICT). Public organizations, Internet Service Providers (ISPs), and telecom operators started reporting alarming statistics of network energy requirements and of the related carbon footprint since the first decade of the 2000s [1]. The Global e-Sustainability Initiative (GeSI) estimated a growth of ICT greenhouse gas emissions (in GtCO₂e, Gt of CO₂ equivalent gases) to 2.3% of global emissions (from 1.3% in 2002) in 2020, if no Green Network Technologies (GNTs) would be adopted [2]. On the other hand, the abatement potential of ICT in other industrial sectors is seven times the size of the ICT sector's own carbon footprint.

Only recently, due to the rise in energy prices, the continuous growth of the customer population, the increase in broadband access demand, and the expanding number of services being offered by telecoms and providers, has energy efficiency become a high-priority objective also for wired networks and service infrastructures (after having started to be addressed for datacenters and wireless networks).

The increasing network energy consumption essentially depends on the new services offered, which follow Moore's law by doubling every two years, and on the need to sustain an ever-growing population of users and user devices. In order to support new generation network infrastructures and related services, telecoms and ISPs need a larger equipment base, with sophisticated architecture able to perform more and more complex operations in a scalable way. Notwithstanding these efforts, it is well known that most networks and networking equipment are currently still provisioned for busy or rush hour load, which typically exceeds their average utilization by a wide margin. While this margin is generally reached in rare and short time periods, the overall power consumption in today's networks remains more or less constant with respect to different traffic utilization levels.



The growing trend toward implementing networking functionalities by means of software [3] on general-purpose machines and of making more aggressive use of virtualization, as represented by the paradigms of Software Defined Networking (SDN) [4] and Network Functions Virtualization (NFV) [5], would also not be sufficient in itself to reduce power consumption, unless accompanied by "green" optimization and consolidation strategies acting as energy-aware traffic engineering policies at the network level [6]. At the same time, processing devices inside the network need to be capable of adapting their performance to the changing traffic conditions, by trading off power and Quality of Service (QoS) requirements. Among the various techniques that can be adopted to this purpose to implement Control Policies (CPs) in network processing devices, Dynamic Adaptation ones consist of adapting the processing rate (AR) or of exploiting low power consumption states in idle periods (LPI) [7].

In this paper, we introduce a Game-Theory-based solution for energy-aware allocation of Virtualized Network Functions (VNFs) within NFV environments. In more detail, in NFV networks a collection of service chains must be allocated on physical network nodes. A service chain is a set of one or more VNFs grouped together to provide a specific service functionality; it can be represented by an oriented graph, where each node corresponds to a particular VNF and each edge describes the operational flow exchanged between a pair of VNFs.

A service request can be allocated on dedicated hardware or by using resources deployed by the Service Provider (SP) that processes the request through virtualized instances. Because of this, two types of service deployments are possible in an NFV network: (i) on physical nodes and (ii) on virtualized instances.

In this paper we focus on the second type of service deployment. We refer to this paradigm as pure NFV. As already outlined above, the SP processes the service request by means of VNFs. In particular, we developed an energy-aware solution to the problem of VNFs' allocation on physical network nodes. This solution is based on the concept of Game Theory (GT). GT is used to model interactions among self-interested players and predict their choice of strategies to optimize cost or utility functions, until a Nash Equilibrium (NE) is reached, where no player can further increase its corresponding utility through individual action (see, e.g., [8] for specific applications in networking).

More specifically, we consider a bank of physical network nodes (in this paper we also use the terms node and resource to refer to the physical network node) performing tasks on requests submitted by the players' population. Hence, in this game the role of the players is represented by the VNFs, which compete for the processing capacity pool, each seeking the minimization of an individual cost function. The nodes can dynamically adjust their processing capacity according to the incoming workload (the processing power required by incoming VNF requests) by means of an AR strategy that aims at minimizing the product of energy consumption and processing delay. On the basis of the result of the nodes' AR strategy, the VNFs' resource sharing costs assume a polynomial form in the workloads, which admits a unique NE.

We examine the effect of different (unconstrained and constrained) forms of the nodes' optimization problem on the equilibrium and compare the power consumption and delay achieved with energy-aware and non-energy-aware strategy profiles.

The rest of the paper is organized as follows. Section 2 discusses the related work. Section 3 briefly summarizes the power model and AR optimization that were introduced in [9]. Although the scenario in [9] is different, it is reasonable to assume that similar considerations are valid here too. On the basis of this result, we derive a number of competitive strategies for the VNFs' allocation in Section 4. Section 5 presents various numerical evaluations, and Section 6 contains the conclusions.

2. Related Work

We now discuss the most relevant works that deal with the resource allocation of VNFs within NFV environments. We decided not only to describe solutions based on GT, but also to provide an overview of the current state of the art in this area. As introduced in the previous section, the key enabling paradigms that will considerably affect the dynamics of ICT networks are SDN and NFV, which are discussed in recent surveys [10–12]. Indeed, SPs and Network Operators (NOs) are facing increasing problems to design and implement novel network functionalities, following the rapid changes that characterize the current ISPs and telecom operators (TOs) [13].

Virtualization represents an efficient and cost-effective strategy to exploit and share physical network resources. In this context, the Network Embedding Problem (NEP) has been considered in several recent works [14–19]. In particular, the Virtual Network Embedding Problem (VNEP) consists of finding the mapping between a set of requests for virtual network resources and the available underlying physical infrastructure (the so-called substrate), ensuring that some given performance requirements (on nodes and links) are guaranteed. Typical node requirements are computational resources (i.e., CPU) or storage space, whereas links have a limited bandwidth and introduce a delay. It has been shown that this problem is NP-hard (it includes as subproblem the multiway separator problem). For this reason, heuristic approaches have been devised [20].

The consolidation of virtual resources is considered in [18], by taking into account energy efficiency. The problem is formulated as a mixed integer linear programming (MILP) model, in order to understand the potential benefits that can be achieved by packing many different virtual tasks on the same physical infrastructure. The observed energy saving is up to 30% for nodes and up to 25% for link energy consumption. Reference [21] presents a solution for the resilient deployment of network functions, using OpenStack for the design and implementation of the proposed service orchestrator mechanism.

An allocation mechanism based on auction theory is proposed in [22]. In particular, the scheme selects the most remunerative virtual network requests, while satisfying QoS requirements and physical network constraints. The system is


split into two network substrates modeling physical and virtual resources, with the final goal of finding the best mapping of virtual nodes and links onto physical ones, according to the QoS requirements (i.e., bandwidth, delay, and CPU bounds).

A novel network architecture is proposed in [23], in order to provide efficient coordinated control of both internal network function state and network forwarding state and to help operators achieve the following goals: (i) offering and satisfying tight service level agreements (SLAs); (ii) accurately monitoring and manipulating network traffic; and (iii) minimizing operating expenses.

Various engineering problems, where the action of one component has some impact on the other components, have been modeled by GT. Indeed, GT, which was applied at the beginning in economics and related domains, is gaining much interest today as a powerful tool to analyze and design communication networks [24]. Therefore, the problems can be formulated in the GT framework, and a stable solution for the components is obtained using the concept of equilibrium [25]. In this regard, GT has been used extensively to develop an understanding of stable operating points for autonomous networks. The nodes are considered as the players. Payoff functions are often defined according to achieved connection bandwidth or similar technical metrics.

To construct algorithms with provable convergence to equilibrium points, many approaches consider network models that can be mapped to specially constructed games. Among this type of games, potential games use a real-valued function that represents the entire player set to optimize some performance metric [8, 26]. We mention Altman et al. [27], who provide an extensive survey on networking games. The models and papers discussed in this reference mostly deal with noncooperative GT, the only exception being a short section focused on bargaining games. Finally, Seddiki et al. [25] presented an approach based on two-stage noncooperative games for bandwidth allocation, which aims at reducing the complexity of network management and avoiding bandwidth performance problems in a virtualized network environment. The first stage of the game is the bandwidth negotiation, where the SP requests bandwidth from multiple Infrastructure Providers (InPs). Each InP decides whether to accept or deny the request, when the SP would cause link congestion. The second stage is the bandwidth provisioning game, where different SPs compete for the bandwidth capacity of a physical link managed by a single InP.

3. Modeling Power Management Techniques

Dynamic Adaptation techniques in physical network nodes reduce the energy usage by exploiting the fact that systems do not need to run at peak performance all the time.

Rate adaptation is obtained by tuning the clock frequency and/or voltage of processors (DVFS, Dynamic Voltage and Frequency Scaling) or by throttling the CPU clock (i.e., the clock signal is disabled for some number of cycles at regular intervals). Decreasing the operating frequency and the voltage of the node, or throttling its clock, obviously allows the reduction of power consumption and of heat dissipation, at the price of slower performance.

Advantage can be taken of these adaptation capabilities to devise control techniques based on feedback on the incoming load. One of the first proposals is that examined in a seminal paper by Nedevschi et al. [28]. Among other possibilities, we consider here the simple optimization strategy developed in [9], aimed at minimizing the product of power consumption and processing delay with respect to the processing load. The application of this control technique gives rise to a quadratic dependence of the power-delay product on the load, which will be exploited in our subsequent game-theoretic resource allocation problem. Unlike the cited works, our solution considers as load the request rate of the services that the SP must process. However, apart from this difference, the general considerations described are valid here too.

Specifically, the dynamic frequency-dependent part of the processor power consumption is proportional to the clock frequency ν and to the square of the supply voltage V [29]. However, if DVFS is used, there is proportionality between the frequency and the voltage raised to some power γ, 0 < γ ≤ 1 [30]. Moreover, the power scaling effect induced by alternating active-idle periods in the queueing system served by the physical network node can be accounted for by multiplying the dynamic power consumption by an increasing concave function of the resource utilization, of the form $(f/\mu)^{1/\alpha}$, where $\alpha \ge 1$ is a parameter and $f$ and $\mu$ are the workload (processing requests per unit time) and the task processing speed, respectively. By taking γ = 1 (which implies a cubic relationship in the operating frequency), the overall dependence of the power consumption Φ on the processing speed (which is proportional to the operating frequency) can be expressed as [9]

$$\Phi(\mu) = K \mu^3 \left( \frac{f}{\mu} \right)^{1/\alpha}, \quad (1)$$

where K > 0 is a proportionality constant. This expression will be exploited in the optimization problems to be defined and studied in the next section.
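As a small numerical illustration of (1), the following sketch (with arbitrary parameter values) shows the $2^{3-1/\alpha}$ growth of the dynamic power when the processing speed is doubled at a fixed workload.

```python
def node_power(mu, f, K=1.0, alpha=2.0):
    """Dynamic power of (1): Phi(mu) = K * mu^3 * (f / mu)^(1 / alpha)."""
    return K * mu ** 3 * (f / mu) ** (1.0 / alpha)

# Doubling the speed at fixed load multiplies the power by 2^(3 - 1/alpha):
print(node_power(2.0, 0.5) / node_power(1.0, 0.5))   # ~5.66 for alpha = 2
```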

4. System Model of Coordinated Network Management Optimization and Players' Game

We consider the situation shown in Figure 1. There are S VNFs, represented by the incoming workload rates $\lambda^1, \ldots, \lambda^S$, competing for N physical network nodes. The workload to the jth node is given by

$$f_j = \sum_{i=1}^{S} f^i_j, \quad (2)$$

where $f^i_j$ is VNF i's contribution. The total workload vector of player i is $\mathbf{f}^i = \mathrm{col}[f^i_1, \ldots, f^i_N]$ and, obviously,

$$\sum_{j=1}^{N} f^i_j = \lambda^i. \quad (3)$$


[Figure 1: system model for VNF resource allocation: S VNFs with workload rates λ¹, λ², …, λ^S submit their flows through a Resource Allocator to N physical nodes, node j receiving $f_j = \sum_{i=1}^{S} f^i_j$.]

Figure 1: System model for VNFs resource allocation.

The computational resources have processing rates $\mu_j$, $j = 1, \ldots, N$, and we associate a cost function with each one, namely,

$$c_j(\mu_j) = \frac{K_j (f_j)^{1/\alpha_j} (\mu_j)^{3-(1/\alpha_j)}}{\mu_j - f_j}. \quad (4)$$

Cost functions of the form shown in (4) have been introduced in [9] and represent the product of power and processing delay under the M/M/1 assumption on each node's queue. They actually correspond to the average energy consumption that can be attributed to the functions' handling (considering that the node will be active whenever there are queued functions). Despite the possible inaccuracy in the representation of the queueing behavior, the denominator of (4) reflects the penalty paid for approaching the resource capacity, which is a major aspect in a model to be used for control purposes.

We now consider two specific control problems, whose resulting resource allocations are interconnected. Specifically, we assume the presence of Control Policies (CPs) acting in the network nodes, whose aim is to dynamically adjust the processing rate in order to minimize cost functions of the form of (4). On the other hand, the players representing the VNFs, knowing the policy adopted by the CPs and the resulting form of their optimal costs, implement their own strategies to distribute their requests among the different resources.

The simple control structure that we envisage actually represents a situation that may be of interest in a number of different operational settings. We only mention two different cases of current high relevance: (i) a multitenant environment, where a number of ISPs are offered services by a datacenter operator (or by a telecom operator in the access network) on a number of virtual machines, and (ii) a collection of virtual environments created by a SP on behalf of its customers, where services are activated to represent customers' functionalities on their own private virtual LANs. It is worth noting that, in both cases, the nodes' customers may be unwilling to disclose their loads to one another, which justifies the decentralized game optimization.

4.1. Nodes' Control Policies. We consider two possible variants.

4.1.1. CP1 (Unconstrained). The direct minimization of (4) with respect to $\mu_j$ yields immediately

$$\mu^*_j(f_j) = \frac{3\alpha_j - 1}{2\alpha_j - 1} f_j = \vartheta_j f_j,$$
$$c^*_j(f_j) = K_j \frac{(\vartheta_j)^{3-(1/\alpha_j)}}{\vartheta_j - 1} (f_j)^2 = h_j \cdot (f_j)^2. \quad (5)$$

We must note that we are considering a continuous solution to the node capacity adjustment. In practice, the physical resources allow a discrete set of working frequencies, with corresponding processing capacities. This would also ensure that the processing speed does not decrease below a lower threshold, avoiding excessive delay in the case of low load. The unconstrained problem is an idealized variant that would make sense only when the load on the node is not too small.
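The closed form (5) is immediate to evaluate; the sketch below computes $\vartheta_j$, the optimal speed, and the quadratic cost coefficient $h_j$ for sample (illustrative) parameter values.

```python
def cp1(f, K, alpha):
    """CP1 of (5): optimal speed mu* = theta * f and cost c*(f) = h * f^2."""
    theta = (3 * alpha - 1) / (2 * alpha - 1)
    h = K * theta ** (3 - 1 / alpha) / (theta - 1)
    return theta * f, h * f ** 2

mu_star, cost = cp1(f=0.5, K=1.0, alpha=2.0)
print(mu_star, cost)     # speed scales linearly with load; cost is quadratic
```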

4.1.2. CP2 (Individual Constraints). Each node has a maximum and a minimum processing capacity, which are taken explicitly into account ($\mu^{\min}_j \le \mu_j \le \mu^{\max}_j$, $j = 1, \ldots, N$). Then

$$\mu^*_j(f_j) = \begin{cases} \vartheta_j f_j, & \underline{f}_j = \dfrac{\mu^{\min}_j}{\vartheta_j} \le f_j \le \dfrac{\mu^{\max}_j}{\vartheta_j} = \overline{f}_j, \\ \mu^{\min}_j, & 0 < f_j < \underline{f}_j, \\ \mu^{\max}_j, & \mu^{\max}_j > f_j > \overline{f}_j. \end{cases} \quad (6)$$

Therefore,

$$c^*_j(f_j) = \begin{cases} h_j \cdot (f_j)^2, & \underline{f}_j \le f_j \le \overline{f}_j, \\ \dfrac{K_j (f_j)^{1/\alpha_j} (\mu^{\max}_j)^{3-(1/\alpha_j)}}{\mu^{\max}_j - f_j}, & \mu^{\max}_j > f_j > \overline{f}_j, \\ \dfrac{K_j (f_j)^{1/\alpha_j} (\mu^{\min}_j)^{3-(1/\alpha_j)}}{\mu^{\min}_j - f_j}, & 0 < f_j < \underline{f}_j. \end{cases} \quad (7)$$

In this case we assume explicitly that $\sum_{i=1}^{S} \lambda^i < \sum_{j=1}^{N} \mu^{\max}_j$.

4.2. Noncooperative Players' Optimization. Given the optimal CPs, which can be found in functional form, we can then state the following.

4.2.1. Players' Problem. Each VNF $i$ wants to minimize, with respect to its workload vector $\mathbf{f}^i = \mathrm{col}[f^i_1, \ldots, f^i_N]$, a weighted (over its workload distribution) sum of the resources' costs, given the flow distribution of the others:

$$\mathbf{f}^{i*} = \arg\min_{f^i_1, \ldots, f^i_N;\ \sum_{j=1}^{N} f^i_j = \lambda^i} J^i = \arg\min_{f^i_1, \ldots, f^i_N;\ \sum_{j=1}^{N} f^i_j = \lambda^i} \frac{1}{\lambda^i} \sum_{j=1}^{N} f^i_j\, c^*_j(f_j), \quad i = 1, \ldots, S. \quad (8)$$

In this formulation, the players' problem is a noncooperative game, of which we can seek a NE [31].

We examine the case of the application of CP1 first. Then the cost of VNF $i$ is given by

$$J^i = \frac{1}{\lambda^i} \sum_{j=1}^{N} f^i_j h_j \cdot (f_j)^2 = \frac{1}{\lambda^i} \sum_{j=1}^{N} f^i_j h_j \cdot (f^i_j + f^{-i}_j)^2, \quad (9)$$

where $f^{-i}_j = \sum_{k \neq i} f^k_j$ is the total flow from the other VNFs to node $j$.

The cost in (9) is convex in $f^i_j$ and belongs to a category of cost functions previously investigated in the networking literature without specific reference to energy efficiency [32, 33]. In particular, it is in the family of functions considered in [33], for which the NE Point (NEP) existence and uniqueness conditions of Rosen [34] have been shown to hold. Therefore, our players' problem admits a unique NEP. The latter can be found by considering the Karush-Kuhn-Tucker stationarity conditions with Lagrange multiplier $\xi^i$:

$$\xi^i = \frac{\partial J^i}{\partial f^i_k}, \quad f^i_k > 0; \qquad \xi^i \le \frac{\partial J^i}{\partial f^i_k}, \quad f^i_k = 0; \qquad k = 1, \ldots, N, \quad (10)$$

from which, for $f^i_k > 0$,

$$\lambda^i \xi^i = h_k (f^i_k + f^{-i}_k)^2 + 2 h_k f^i_k (f^i_k + f^{-i}_k),$$
$$f^i_k = -\frac{2}{3} f^{-i}_k \pm \frac{1}{3 h_k} \sqrt{(h_k f^{-i}_k)^2 + 3 h_k \lambda^i \xi^i}. \quad (11)$$

Excluding the negative solution, the nonzero components are those with the smallest and equal partial derivatives; in a subset $\mathcal{M} \subseteq \{1, \ldots, N\}$ they yield $\sum_{j \in \mathcal{M}} f^i_j = \lambda^i$, that is,

$$\sum_{j \in \mathcal{M}} \left[ -\frac{2}{3} f^{-i}_j + \frac{1}{3 h_j} \sqrt{(h_j f^{-i}_j)^2 + 3 h_j \lambda^i \xi^i} \right] = \lambda^i, \quad (12)$$

from which $\xi^i$ can be determined.
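Computationally, (11) and (12) suggest a simple procedure: for each player, bisect on the multiplier $\xi^i$ until the nonnegative flows sum to $\lambda^i$, and iterate best responses across players until they settle. The sketch below is our illustrative reading of these equations, not code from the paper, and a fixed iteration count stands in for a proper convergence test.

```python
import math

def best_response(lam, B, h, tol=1e-9):
    """Player's best response via (11)-(12): given the others' flows
    B[j] = f_j^{-i}, bisect on xi so that the nonnegative flows sum to lam."""
    def flows(xi):
        return [max(0.0, -2.0 * B[j] / 3.0 +
                    math.sqrt((h[j] * B[j]) ** 2 + 3.0 * h[j] * lam * xi)
                    / (3.0 * h[j]))
                for j in range(len(B))]
    lo, hi = 0.0, 1.0
    while sum(flows(hi)) < lam:           # bracket the multiplier
        hi *= 2.0
    while hi - lo > tol:                  # bisection on xi
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if sum(flows(mid)) < lam else (lo, mid)
    return flows(hi)

def nash_equilibrium(lams, h, iters=200):
    """Iterated best responses for the quadratic game (9); convergence to the
    unique NEP follows from the uniqueness conditions discussed above."""
    N, S = len(h), len(lams)
    f = [[lam / N] * N for lam in lams]   # start from uniform splits
    for _ in range(iters):
        for i in range(S):
            B = [sum(f[k][j] for k in range(S) if k != i) for j in range(N)]
            f[i] = best_response(lams[i], B, h)
    return f

print(nash_equilibrium([0.6, 0.9], h=[1.0, 2.0, 4.0]))
```

Note that the projection to zero in the sketch matches the complementarity condition in (10): a component is set to zero exactly when $\lambda^i \xi^i \le h_k (f^{-i}_k)^2$.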

As regards form (7) of the optimal node cost, we can note that, if we are restricted to $\underline{f}_j \le f_j \le \overline{f}_j$, $j = 1, \ldots, N$ (a coupled convex constraint set), the conditions of Rosen still hold. If we allow the larger constraint set $0 < f_j < \mu^{\max}_j$, $j = 1, \ldots, N$, the nodes' optimal cost functions are no longer of polynomial type. However, the composite function is still continuously differentiable (see the Appendix). Then it would (i) satisfy the conditions for a convex game (the overall function composed of the three parts is convex), which guarantee the existence of a NEP [34, Th. 1], and (ii) possess equivalent properties to functions of Type A as defined in [32], which lead to uniqueness of the NEP.

5. Numerical Results

In order to evaluate our criterion, we present numerical results deriving from the application of the simplest Control Policy (CP1), which implies the solution of a completely quadratic problem (corresponding to costs as in (9) for the NEP). We compare the resulting allocations, power consumption, and average delays with those stemming from the application (on non-energy-aware nodes) of the algorithm for the minimization of the pure delay function that was developed in [35, Proposition 1]. This algorithm is considered a valid reference point in order to provide a good validation of our criterion. The corresponding cost of VNF $i$ is

$$J^i_T = \sum_{j=1}^{N} f^i_j\, T_j(f_j), \quad i = 1, \ldots, S, \quad (13)$$

where

$$T_j(f_j) = \begin{cases} \dfrac{1}{\mu^{\max}_j - f_j}, & f_j < \mu^{\max}_j, \\ \infty, & f_j \ge \mu^{\max}_j. \end{cases} \quad (14)$$

The total demand is assumed to be less than the total operating capacity, as we have done with our CP2 in (7).

To make the comparison fair, we have to make sure that the maximum operating capacities $\mu^{\max}_j$, $j = 1, \ldots, N$ (that are constant in (14)), are compatible with the quadratic behavior stemming from the application of CP1. To this aim, let $^{(LCP1)}f_j$ denote the total input workload to node $j$ corresponding to the NEP obtained by our problem; we then choose

$$\mu^{\max}_j = \vartheta_j\,^{(LCP1)}f_j, \quad j = 1, \ldots, N. \quad (15)$$

We indicate with $^{(T)}f_j$, $^{(T)}f^i_j$, $j = 1, \ldots, N$, $i = 1, \ldots, S$, the total node input workloads and those corresponding to VNF $i$, respectively, produced by the NEP deriving from the minimization of costs in (13). The average delays for the flow of player $i$ under the two strategy profiles are obtained as

$$D^i_T = \frac{1}{\lambda^i} \sum_{j=1}^{N} \frac{^{(T)}f^i_j}{\mu^{\max}_j - {}^{(T)}f_j} = \frac{1}{\lambda^i} \sum_{j=1}^{N} \frac{^{(T)}f^i_j}{\vartheta_j\,^{(LCP1)}f_j - {}^{(T)}f_j},$$
$$D^i_{LCP1} = \frac{1}{\lambda^i} \sum_{j=1}^{N} \frac{^{(LCP1)}f^i_j}{\vartheta_j\,^{(LCP1)}f_j - {}^{(LCP1)}f_j} = \frac{1}{\lambda^i} \sum_{j=1}^{N} \frac{^{(LCP1)}f^i_j}{(\vartheta_j - 1)\,^{(LCP1)}f_j}, \quad (16)$$

and the power consumption values are given by

$$^{(T)}P_j = K_j \left(\mu_j^{\max}\right)^3 = K_j \left(\vartheta_j \, {}^{(\mathrm{LCP1})}f_j\right)^3,$$

$$^{(\mathrm{LCP1})}P_j = K_j \left(\vartheta_j \, {}^{(\mathrm{LCP1})}f_j\right)^3 \left( \frac{^{(\mathrm{LCP1})}f_j}{\vartheta_j \, {}^{(\mathrm{LCP1})}f_j} \right)^{1/\alpha_j} = K_j \left(\vartheta_j \, {}^{(\mathrm{LCP1})}f_j\right)^3 \vartheta_j^{-1/\alpha_j}. \quad (17)$$

Our data for the comparison are organized as follows. We consider normalized input rates, so that $\lambda_i \le 1$, $i = 1, \dots, S$. We generate randomly $S$ values corresponding to the VNFs' input rates from independent uniform distributions; then

(1) we find the NEP of our quadratic game by choosing the parameters $K_j$, $\alpha_j$, $j = 1, \dots, N$, also from independent uniform distributions (with $1 \le K_j \le 10$, $2 \le \alpha_j \le 3$, resp.);

(2) by using the workload profiles obtained, we compute the values $\mu_j^{\max}$, $j = 1, \dots, N$, from (15) and find the NEP corresponding to the costs in (13) and (14) by using the same input rates;

(3) we compute the corresponding average delays and power consumption values for the two cases by using (16) and (17), respectively.

Steps (1)-(3) are repeated with a new configuration of random values of the input rates for a certain number of times; then, for both games, we average the values of the delays and power consumption per VNF, and of the total delay and power consumption averaged over the VNFs, over all trials. We compare the results obtained in this way for four different settings of VNFs and nodes, namely, $S = N = 3$; $S = 3$, $N = 5$; $S = 5$, $N = 3$; $S = 5$, $N = 5$. The rationale of repeating the experiments with random configurations is to explore a number of different possible settings and collect the results in a synthetic form.
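A compact sketch of steps (1)-(3) follows. The two NEP solvers are placeholders (the paper computes the equilibria in MATLAB); each is assumed to return the total node workloads $f_j$ and the per-VNF workloads $f_j^i$ at its equilibrium, and $\vartheta_j = (3\alpha_j - 1)/(2\alpha_j - 1)$ is taken as in the Appendix. Everything else follows (15)-(17); names are illustrative.

    import random

    def evaluate_setting(S, N, solve_quadratic_nep, solve_delay_nep, trials=30):
        # Monte Carlo comparison of the EE and NEE games (steps (1)-(3)).
        results = []
        for _ in range(trials):
            lam = [random.uniform(0.01, 1.0) for _ in range(S)]   # VNF input rates
            K = [random.uniform(1.0, 10.0) for _ in range(N)]
            alpha = [random.uniform(2.0, 3.0) for _ in range(N)]
            theta = [(3 * a - 1) / (2 * a - 1) for a in alpha]
            f1, g1 = solve_quadratic_nep(lam, K, alpha)           # step (1): EE game
            mu_max = [theta[j] * f1[j] for j in range(N)]         # (15)
            f2, g2 = solve_delay_nep(lam, mu_max)                 # step (2): NEE game
            # step (3): average delays (16); f2[j] < mu_max[j] is
            # guaranteed by the infinite cost in (14)
            D_T = [sum(g2[i][j] / (mu_max[j] - f2[j]) for j in range(N)) / lam[i]
                   for i in range(S)]
            D_1 = [sum(g1[i][j] / ((theta[j] - 1) * f1[j]) for j in range(N)) / lam[i]
                   for i in range(S)]
            # power consumption (17)
            P_T = [K[j] * mu_max[j] ** 3 for j in range(N)]
            P_1 = [K[j] * mu_max[j] ** 3 * theta[j] ** (-1 / alpha[j])
                   for j in range(N)]
            results.append((D_T, D_1, P_T, P_1))
        return results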

For each VNFs-nodes setting, 30 random input configurations are generated to produce the final average values. In Figures 2-10, the labels EE and NEE refer to the energy-efficient case (quadratic cost) and to the non-energy-efficient one (see (13) and (14)), respectively.

Figure 2: Power consumption with $S = N = 3$.

Figure 3: User (VNF) delays with $S = N = 3$.

Instead, the label "Saving (%)" refers to the saving (in percentage) of the EE case compared to the NEE one. When it is negative, the EE case presents higher values than the NEE one. Finally, the label "User" refers to the VNF.

The results are reported in Figures 2-10. The model implementation and the relative results have been developed with MATLAB [36].

In general, the energy-efficient optimization tends to save energy at the expense of a relatively small increase in delay. Indeed, as shown in the figures, the saving is positive for the energy consumption and negative for the delay. The energy saving is always higher than 18% in every case, while the delay increase is always lower than 4%.

However, it can be noted (by comparing, e.g., Figures 5 and 7) that configurations with lower load are penalized in the delay. This effect is caused by the behavior of the simple linear adaptation strategy, which maintains constant utilization, and highlights the relevance of setting not only maximum but also minimum values for the processing capacity (or for the node input workloads).

Figure 4: Power consumption with $S = 3$, $N = 5$.

Figure 5: User (VNF) delays with $S = 3$, $N = 5$.

Figure 6: Power consumption with $S = 5$, $N = 3$.

Figure 7: User (VNF) delays with $S = 5$, $N = 3$.

Figure 8: Power consumption with $S = 5$, $N = 5$.

Figure 9: User (VNF) delays with $S = 5$, $N = 5$.


Figure 10: Power consumption and delay product of the various cases.


Another consideration arising from the results concerns the variation of the energy consumption by changing the number of players and/or the nodes. For example, Figure 6 (case $S = 5$, $N = 3$) shows a total power consumption significantly higher than the one shown in Figure 8 (case $S = 5$, $N = 5$). The reason for this difference is the lower workload managed by the nodes in the case $S = 5$, $N = 5$ (greater number of nodes with an equal number of users), making it possible to exploit the AR mechanisms with significant energy saving.

Finally, Figure 10 shows the product of the power consumption and the delay for each studied case. As described in the previous sections, our criterion aims at minimizing this product. As can be easily seen, the saving resulting from the application of our policy is always above 10%.

6. Conclusions

We have examined the possible effect of introducing simple energy optimization strategies in physical network nodes performing services for a set of VNFs that request their computational power within an NFV environment. The VNFs' strategy profiles for resource sharing have been obtained from the solution of a noncooperative game, where the players are aware of the adaptive behavior of the resources and aim at minimizing costs that reflect this behavior. We have compared the energy consumption and processing delay with those stemming from a game played with non-energy-aware nodes and VNFs. The results show that the effect on delay is almost negligible, whereas significant power saving can be obtained. In particular, the energy saving is always higher than 18% in every case, with a delay increase always lower than 4%.

From another point of view, it would also be of interest to investigate socially optimal solutions stemming from a Nash Bargaining Problem (NBP), along the lines of [37], by defining suitable utility functions for the players. Another interesting evolution of this work could be the application of additional constraints to the proposed solution, such as, for example, the maximum delay that a flow can support.

Appendix

We check the maintenance of continuous differentiability at the point $f_j = \mu_j^{\max}/\vartheta_j$. By noting that the derivative of the cost function of player $i$ with respect to $f_j^i$ has the form

$$\frac{\partial J^i}{\partial f_j^i} = c_j^*(f_j) + f_j^i \, c_j^{*\prime}(f_j), \quad (\mathrm{A.1})$$

it is immediate to note that we need to compare

$$\left. \frac{\partial}{\partial f_j^i} \left[ K_j \frac{\left(f_j^i + f_j^{-i}\right)^{1/\alpha_j} \left(\mu_j^{\max}\right)^{3 - 1/\alpha_j}}{\mu_j^{\max} - f_j^i - f_j^{-i}} \right] \right|_{f_j^i = \mu_j^{\max}/\vartheta_j - f_j^{-i}},$$

$$\left. \frac{\partial}{\partial f_j^i} \left[ h_j \cdot \left(f_j\right)^2 \right] \right|_{f_j^i = \mu_j^{\max}/\vartheta_j - f_j^{-i}}. \quad (\mathrm{A.2})$$

We have

$$\left. K_j \frac{(1/\alpha_j) \left(f_j\right)^{1/\alpha_j - 1} \left(\mu_j^{\max}\right)^{3 - 1/\alpha_j} \left(\mu_j^{\max} - f_j\right) + \left(f_j\right)^{1/\alpha_j} \left(\mu_j^{\max}\right)^{3 - 1/\alpha_j}}{\left(\mu_j^{\max} - f_j\right)^2} \right|_{f_j = \mu_j^{\max}/\vartheta_j}$$

$$= K_j \frac{\left(\mu_j^{\max}\right)^{3 - 1/\alpha_j} \left(\mu_j^{\max}/\vartheta_j\right)^{1/\alpha_j} \left[ (1/\alpha_j) \left(\mu_j^{\max}/\vartheta_j\right)^{-1} \left(\mu_j^{\max} - \mu_j^{\max}/\vartheta_j\right) + 1 \right]}{\left(\mu_j^{\max} - \mu_j^{\max}/\vartheta_j\right)^2}$$

$$= K_j \frac{\left(\mu_j^{\max}\right)^{3} \left(\vartheta_j\right)^{-1/\alpha_j} \left[ (1/\alpha_j) \, \vartheta_j \left(1 - 1/\vartheta_j\right) + 1 \right]}{\left(\mu_j^{\max}\right)^2 \left(1 - 1/\vartheta_j\right)^2}$$

$$= K_j \mu_j^{\max} \frac{\left(\vartheta_j\right)^{-1/\alpha_j} \left(\vartheta_j/\alpha_j - 1/\alpha_j + 1\right)}{\left(\left(\vartheta_j - 1\right)/\vartheta_j\right)^2},$$

$$\left. 2 h_j f_j \right|_{f_j = \mu_j^{\max}/\vartheta_j} = 2 K_j \frac{\left(\vartheta_j\right)^{3 - 1/\alpha_j}}{\vartheta_j - 1} \cdot \frac{\mu_j^{\max}}{\vartheta_j}. \quad (\mathrm{A.3})$$

Then, let us check when

$$K_j \mu_j^{\max} \frac{\left(\vartheta_j\right)^{-1/\alpha_j} \left(\vartheta_j/\alpha_j - 1/\alpha_j + 1\right)}{\left(\left(\vartheta_j - 1\right)/\vartheta_j\right)^2} = 2 K_j \frac{\left(\vartheta_j\right)^{3 - 1/\alpha_j}}{\vartheta_j - 1} \cdot \frac{\mu_j^{\max}}{\vartheta_j},$$

that is,

$$\left(\frac{\vartheta_j}{\alpha_j} - \frac{1}{\alpha_j} + 1\right) \cdot \frac{\left(\vartheta_j\right)^2}{\vartheta_j - 1} = 2 \left(\vartheta_j\right)^2,$$

$$\frac{\vartheta_j}{\alpha_j} - \frac{1}{\alpha_j} + 1 = 2 \vartheta_j - 2.$$

Rearranging and substituting $\vartheta_j = (3\alpha_j - 1)/(2\alpha_j - 1)$, we get

$$\left(\frac{1}{\alpha_j} - 2\right) \frac{3\alpha_j - 1}{2\alpha_j - 1} - \frac{1}{\alpha_j} + 3 = 0,$$

$$\left(\frac{1 - 2\alpha_j}{\alpha_j}\right) \frac{3\alpha_j - 1}{2\alpha_j - 1} - \frac{1}{\alpha_j} + 3 = 0,$$

$$\frac{1 - 3\alpha_j}{\alpha_j} - \frac{1}{\alpha_j} + 3 = 0,$$

$$1 - 3\alpha_j - 1 + 3\alpha_j = 0, \quad (\mathrm{A.4})$$

which is identically satisfied; hence, the composite cost function is continuously differentiable at $f_j = \mu_j^{\max}/\vartheta_j$.
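The identity can also be verified numerically. In the sketch below (with illustrative values), $h_j$ is chosen so that the two cost branches are continuous at the junction, consistently with the expression of $2h_j f_j$ in (A.3); the one-sided finite-difference derivatives at $f_j = \mu_j^{\max}/\vartheta_j$ then coincide, confirming the $C^1$ junction.

    # Numerical check of the continuous differentiability discussed above.
    # Assumptions of this sketch: quadratic branch c(f) = h*f**2, adaptive
    # branch c(f) = K*f**(1/a)*mu**(3 - 1/a)/(mu - f), junction at
    # f* = mu/theta with theta = (3a - 1)/(2a - 1), and h set for continuity.
    K, a, mu = 5.0, 2.5, 10.0
    theta = (3 * a - 1) / (2 * a - 1)
    h = K * theta ** (3 - 1 / a) / (theta - 1)

    quad = lambda f: h * f ** 2
    poly = lambda f: K * f ** (1 / a) * mu ** (3 - 1 / a) / (mu - f)

    fstar, eps = mu / theta, 1e-6
    print(quad(fstar), poly(fstar))                  # equal values: continuity
    print((quad(fstar) - quad(fstar - eps)) / eps)   # left derivative
    print((poly(fstar + eps) - poly(fstar)) / eps)   # right derivative (equal)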

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was partially supported by the H2020 European Projects ARCADIA (http://www.arcadia-framework.eu) and INPUT (http://www.input-project.eu), funded by the European Commission.

References

[1] C. Bianco, F. Cucchietti, and G. Griffa, "Energy consumption trends in the next generation access network - a telco perspective," in Proceedings of the 29th International Telecommunication Energy Conference (INTELEC '07), pp. 737-742, Rome, Italy, September-October 2007.

[2] GeSI, SMARTer 2020: The Role of ICT in Driving a Sustainable Future, December 2012, http://gesi.org/portfolio/report/72.

[3] A. Manzalini, "Future edge ICT networks," IEEE COMSOC MMTC E-Letter, vol. 7, no. 7, pp. 1-4, 2012.

[4] B. A. A. Nunes, M. Mendonca, X.-N. Nguyen, K. Obraczka, and T. Turletti, "A survey of software-defined networking: past, present, and future of programmable networks," IEEE Communications Surveys & Tutorials, vol. 16, no. 3, pp. 1617-1634, 2014.

[5] "Network functions virtualisation - introductory white paper," in Proceedings of the SDN and OpenFlow World Congress, Darmstadt, Germany, October 2012.

[6] R. Bolla, R. Bruschi, F. Davoli, and C. Lombardo, "Fine-grained energy-efficient consolidation in SDN networks and devices," IEEE Transactions on Network and Service Management, vol. 12, no. 2, pp. 132-145, 2015.

[7] R. Bolla, R. Bruschi, F. Davoli, and F. Cucchietti, "Energy efficiency in the future internet: a survey of existing approaches and trends in energy-aware fixed network infrastructures," IEEE Communications Surveys & Tutorials, vol. 13, no. 2, pp. 223-244, 2011.

[8] I. Menache and A. Ozdaglar, Network Games: Theory, Models, and Dynamics, Morgan & Claypool Publishers, San Rafael, Calif, USA, 2011.

[9] R. Bolla, R. Bruschi, F. Davoli, and P. Lago, "Optimizing the power-delay product in energy-aware packet forwarding engines," in Proceedings of the 24th Tyrrhenian International Workshop on Digital Communications - Green ICT (TIWDC '13), pp. 1-5, IEEE, Genoa, Italy, September 2013.

[10] A. Khan, A. Zugenmaier, D. Jurca, and W. Kellerer, "Network virtualization: a hypervisor for the internet?" IEEE Communications Magazine, vol. 50, no. 1, pp. 136-143, 2012.

[11] Q. Duan, Y. Yan, and A. V. Vasilakos, "A survey on service-oriented network virtualization toward convergence of networking and cloud computing," IEEE Transactions on Network and Service Management, vol. 9, no. 4, pp. 373-392, 2012.

[12] G. M. Roy, S. K. Saurabh, N. M. Upadhyay, and P. K. Gupta, "Creation of virtual node, virtual link and managing them in network virtualization," in Proceedings of the World Congress on Information and Communication Technologies (WICT '11), pp. 738-742, IEEE, Mumbai, India, December 2011.

[13] A. Manzalini, R. Saracco, C. Buyukkoc et al., "Software-defined networks or future networks and services - main technical challenges and business implications," in Proceedings of the IEEE Workshop SDN4FNS, Trento, Italy, November 2013.

[14] M. Yu, Y. Yi, J. Rexford, and M. Chiang, "Rethinking virtual network embedding: substrate support for path splitting and migration," ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, pp. 17-19, 2008.

[15] A. Jarray and A. Karmouch, "Decomposition approaches for virtual network embedding with one-shot node and link mapping," IEEE/ACM Transactions on Networking, vol. 23, no. 3, pp. 1012-1025, 2015.

[16] N. M. M. K. Chowdhury, M. R. Rahman, and R. Boutaba, "Virtual network embedding with coordinated node and link mapping," in Proceedings of the 28th Conference on Computer Communications (IEEE INFOCOM '09), pp. 783-791, Rio de Janeiro, Brazil, April 2009.

[17] X. Cheng, S. Su, Z. Zhang et al., "Virtual network embedding through topology-aware node ranking," ACM SIGCOMM Computer Communication Review, vol. 41, no. 2, pp. 38-47, 2011.

[18] J. F. Botero, X. Hesselbach, M. Duelli, D. Schlosser, A. Fischer, and H. De Meer, "Energy efficient virtual network embedding," IEEE Communications Letters, vol. 16, no. 5, pp. 756-759, 2012.

[19] A. Leivadeas, C. Papagianni, and S. Papavassiliou, "Efficient resource mapping framework over networked clouds via iterated local search-based request partitioning," IEEE Transactions on Parallel and Distributed Systems, vol. 24, no. 6, pp. 1077-1086, 2013.

[20] A. Fischer, J. F. Botero, M. T. Beck, H. de Meer, and X. Hesselbach, "Virtual network embedding: a survey," IEEE Communications Surveys & Tutorials, vol. 15, no. 4, pp. 1888-1906, 2013.

[21] M. Scholler, M. Stiemerling, A. Ripke, and R. Bless, "Resilient deployment of virtual network functions," in Proceedings of the 5th International Congress on Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT '13), pp. 208-214, Almaty, Kazakhstan, September 2013.

[22] A. Jarray and A. Karmouch, "VCG auction-based approach for efficient Virtual Network embedding," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 609-615, Ghent, Belgium, May 2013.

[23] A. Gember-Jacobson, R. Viswanathan, C. Prakash et al., "OpenNF: enabling innovation in network function control," in Proceedings of the ACM Conference on Special Interest Group on Data Communication (SIGCOMM '14), pp. 163-174, ACM, Chicago, Ill, USA, August 2014.

[24] J. Antoniou and A. Pitsillides, Game Theory in Communication Networks: Cooperative Resolution of Interactive Networking Scenarios, CRC Press, Boca Raton, Fla, USA, 2013.

[25] M. S. Seddiki, M. Frikha, and Y.-Q. Song, "A non-cooperative game-theoretic framework for resource allocation in network virtualization," Telecommunication Systems, vol. 61, no. 2, pp. 209-219, 2016.

[26] L. Pavel, Game Theory for Control of Optical Networks, Springer, New York, NY, USA, 2012.

[27] E. Altman, T. Boulogne, R. El-Azouzi, T. Jimenez, and L. Wynter, "A survey on networking games in telecommunications," Computers & Operations Research, vol. 33, no. 2, pp. 286-311, 2006.

[28] S. Nedevschi, L. Popa, G. Iannaccone, S. Ratnasamy, and D. Wetherall, "Reducing network energy consumption via sleeping and rate-adaptation," in Proceedings of the 5th USENIX Symposium on Networked Systems Design and Implementation (NSDI '08), pp. 323-336, San Francisco, Calif, USA, April 2008.

[29] S. Henzler, Power Management of Digital Circuits in Deep Sub-Micron CMOS Technologies, vol. 25 of Springer Series in Advanced Microelectronics, Springer, Dordrecht, The Netherlands, 2007.

[30] K. Li, "Scheduling parallel tasks on multiprocessor computers with efficient power management," in Proceedings of the IEEE International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum (IPDPSW '10), pp. 1-8, IEEE, Atlanta, Ga, USA, April 2010.

[31] T. Basar, Lecture Notes on Non-Cooperative Game Theory, Hamilton Institute, Kildare, Ireland, 2010, http://www.hamilton.ie/ollie/Downloads/Game.pdf.

[32] A. Orda, R. Rom, and N. Shimkin, "Competitive routing in multiuser communication networks," IEEE/ACM Transactions on Networking, vol. 1, no. 5, pp. 510-521, 1993.

[33] E. Altman, T. Basar, T. Jimenez, and N. Shimkin, "Competitive routing in networks with polynomial costs," IEEE Transactions on Automatic Control, vol. 47, no. 1, pp. 92-96, 2002.

[34] J. B. Rosen, "Existence and uniqueness of equilibrium points for concave N-person games," Econometrica, vol. 33, no. 3, pp. 520-534, 1965.

[35] Y. A. Korilis, A. A. Lazar, and A. Orda, "Architecting noncooperative networks," IEEE Journal on Selected Areas in Communications, vol. 13, no. 7, pp. 1241-1251, 1995.

[36] MATLAB: The Language of Technical Computing, http://www.mathworks.com/products/matlab.

[37] H. Yaiche, R. R. Mazumdar, and C. Rosenberg, "A game theoretic framework for bandwidth allocation and pricing in broadband networks," IEEE/ACM Transactions on Networking, vol. 8, no. 5, pp. 667-678, 2000.

Research Article

A Processor-Sharing Scheduling Strategy for NFV Nodes

Giuseppe Faraci, Alfio Lombardo, and Giovanni Schembra

Dipartimento di Ingegneria Elettrica, Elettronica e Informatica (DIEEI), University of Catania, 95123 Catania, Italy

Correspondence should be addressed to Giovanni Schembra; schembra@dieei.unict.it

Received 2 November 2015; Accepted 12 January 2016

Academic Editor: Xavier Hesselbach

Copyright © 2016 Giuseppe Faraci et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The introduction of the two paradigms SDN and NFV to "softwarize" the current Internet is making management and resource allocation two key challenges in the evolution towards the Future Internet. In this context, this paper proposes Network-Aware Round Robin (NARR), a processor-sharing strategy to reduce delays in traversing SDN/NFV nodes. The application of NARR alleviates the job of the Orchestrator by automatically working at the intranode level, dynamically assigning the processor slices to the virtual network functions (VNFs) according to the state of the queues associated with the output links of the network interface cards (NICs). An extensive simulation set is presented to show the improvements achieved with respect to two other processor-sharing strategies chosen as reference.

1. Introduction

In the last few years, the diffusion of new complex and efficient distributed services in the Internet is becoming increasingly difficult because of the ossification of the Internet protocols, the proprietary nature of existing hardware appliances, the costs, and the lack of skilled professionals for maintaining and upgrading them.

In order to alleviate these problems, two new network paradigms, Software Defined Networks (SDN) [1-5] and Network Functions Virtualization (NFV) [6, 7], have been recently proposed with the specific target of improving the flexibility of network service provisioning and reducing the time to market of new services.

SDN is an emerging architecture that aims at making the network dynamic, manageable, and cost-effective, by decoupling the system that makes decisions about where traffic is sent (the control plane) from the underlying system that forwards traffic to the selected destination (the data plane). In this way, the network control becomes directly programmable, and the underlying infrastructure is abstracted for applications and network services.

NFV is a core structural change in the way telecommunication infrastructure is deployed. The NFV initiative started in late 2012 by some of the biggest telecommunications service providers, which formed an Industry Specification Group (ISG) within the European Telecommunications Standards Institute (ETSI). The interest has grown, involving today more than 28 network operators and over 150 technology providers from across the telecommunications industry [7]. The NFV paradigm leverages on virtualization technologies and commercial off-the-shelf programmable hardware, such as general-purpose servers, storage, and switches, with the final target of decoupling the software implementation of network functions from the underlying hardware.

The coexistence and the interaction of both NFV and SDN paradigms is giving to the network operators the possibility of achieving greater agility and acceleration in new service deployments, with a consequent considerable reduction of both Capital Expenditure (CAPEX) and Operational Expenditure (OPEX) [8].

One of the main challenging problems in deploying an SDN/NFV network is an efficient design of the resource allocation and management functions that are in charge of the network Orchestrator. Although this task is well covered in data center and cloud scenarios [9, 10], it is currently a challenging problem in a geographic network, where transmission delays cannot be neglected and transmission capacities of the interconnection links are not comparable with those of the above scenarios. This is the reason why the problem of orchestrating an SDN/NFV network is still open and attracts a lot of research interest from both academia and industry.



More specifically, the whole problem of orchestrating an SDN/NFV network is very complex because it involves a design work at both internode and intranode levels [11, 12]. At the internode level, and in all the cases where the time between each execution is significantly greater than the time to collect, compute, and disseminate results, the application of a centralized approach is practicable [13, 14]. In these cases, in fact, taking into consideration both traffic characterization and the required level of quality of service (QoS), the Orchestrator is able to decide in a centralized manner how many instances of the same function have to be run simultaneously, the network nodes that have to execute them, and the routing strategies that allow traffic flows to cross the nodes where the requested network functions are running.

Instead, centralizing a strategy dealing with operations that require dynamic reconfiguration of network resources is actually unfeasible to be executed by the Orchestrator, for problems of resilience and scalability. Therefore, adaptive management operations with short timescales require a distributed approach. The authors of [15] propose a framework to support adaptive resource management operations, which involve short timescale reconfiguration of network resources, showing how requirements in terms of load-balancing and energy management [16-18] can be satisfied. Another work in the same direction is [19], which proposes a solution for the consolidation of VMs on local computing resources, exclusively based on local information. The work in [20] aims at alleviating the inter-VM network latency, defining a hypervisor scheduler algorithm that is able to take into consideration the allocation of the resources within a consolidated environment, scheduling VMs to reduce their waiting latency in the run queue. Another approach is applied in [21], which introduces a policy to manage the internal on/off switching of virtual network functions (VNFs) in NFV-compliant Customer Premises Equipment (CPE) devices.

The focus of this paper is on another fundamental problem that is inherent to resource allocation within an SDN/NFV node, that is, the decision of the percentage of CPU to be assigned to each VNF. If at a first glance this could seem a classical problem of processor sharing, which has been widely explored in the past literature [15, 22, 23], actually it is much more complex, because performance can be strongly improved by leveraging on the correlation with the output queues associated with the network interface cards (NICs).

With all this in mind, the main contribution of this paper is the definition of a processor-sharing policy, in the following referred to as Network-Aware Round Robin (NARR), which is specific for SDN/NFV nodes. Starting from the consideration that packets that have received the service of a network function from a virtual machine (VM) running on a given node are enqueued to wait for transmission through a given NIC, the proposed strategy dynamically changes the slices of the CPU assigned to each VNF according to the state of the output NIC queues. More specifically, the NARR strategy gives a larger CPU slice to serve packets that will leave the node through the NIC that is currently less loaded, in such a way as to minimize wastes of the NIC output link capacities, also minimizing the overall delay experienced by packets traversing nodes that implement NARR.

As a side contribution, the paper calculates an on-off model for the traffic leaving the SDN/NFV node on each NIC output link. This model can be used as a building block for the design and performance evaluation of an entire network.

The paper is structured as follows. Section 2 describes the node architecture. Section 3 introduces the NARR strategy. Section 4 presents a case study and shows some numerical results. Finally, Section 5 draws some conclusions and discusses some future work.

2. System Description

The target of this section is the description of the system we consider in the rest of the paper. It is an SDN/NFV node, as the one considered in [11, 12], where we will apply the NARR strategy. Its architecture, shown in Figure 1(a), is compliant with the ETSI Specifications [24]. It is composed of three different domains, namely, the Compute domain, the Hypervisor domain, and the Infrastructure Network domain. The Compute domain provides the computational and storage hardware resources that allow the node to host the VNFs. Thanks to the computing and storage virtualization provided by the Hypervisor domain, a VM can be created, migrated from one node to another one, and halted, in order to optimize the deployment according to specific performance parameters. Communications among the VMs, and between the VMs and the external environment, are provided by the Infrastructure Network domain.

The SDN/NFV node is remotely controlled by the Orchestrator, whose architecture is shown in Figure 1(b). It is constituted by three main blocks. The Orchestration Engine executes all the algorithms and the strategies to manage and orchestrate the whole network. After each decision, the Orchestration Engine requests that the NFV Coordinator instantiates, migrates, or halts VMs and, consequently, requests that the SDN Controller modifies the flow tables of the SDN switches in the network, in such a way that traffic flows can traverse the VMs hosting the requested VNFs.

With this in mind, a functional architecture of the NFV node is represented in Figure 2. Its main components are the Processor, which manages the Compute domain and hosts the Hypervisor domain, and the Network Interface Cards (NICs), with their queues, which constitute the "Network Hardware" block in Figure 1.

Let $M$ be the number of virtual network functions (VNFs) that are running in the node, and let $L$ be the number of output NICs. In order to simplify notation, in the following we will assume that all the NICs have the same characteristics in terms of buffer capacity and output rate. So, let $K^{(\mathrm{NIC})}$ be the size of the queue associated with each NIC, that is, the maximum number of packets that each queue can contain, and let $\mu^{(\mathrm{NIC})}$ be the transmission rate of the output link associated with each NIC, expressed in bit/s.

The Flow Distributor block has the task of routing each entering flow towards the function required by the flow. It is a software SDN switch that can be implemented, for example, with OpenvSwitch [25]. It routes the flows to the VMs running the requested VNFs according to the control messages received by the SDN Controller residing in the Orchestrator.


Figure 1: Network function virtualization infrastructure. (a) NFV node; (b) Orchestrator.

Figure 2: NFV node functional architecture.

The most common protocol that can be used for the communications between the SDN Controller and the Orchestrator is OpenFlow [26].

Let $\mu^{(P)}$ be the total processing rate of the processor, expressed in packets/s. This rate is shared among all the active functions according to a processor rate scheduling strategy. Let $\mu^{(F)}$ be the array whose generic element $\mu^{(F)}_{[m]}$, with $m \in \{1, \dots, M\}$, is the portion of the processor rate assigned to the VM implementing the function $F_m$. Of course, we have

$$\sum_{m=1}^{M} \mu^{(F)}_{[m]} = \mu^{(P)}. \quad (1)$$

Once a packet has been served by the required function, it is sent to one of the NICs to exit from the node. If the NIC is transmitting another packet, the arriving packets are enqueued in the NIC queue. We will indicate the queue associated with the generic NIC $l$ as $Q^{(\mathrm{NIC})}_l$.

In order to implement the NARR strategy proposed in this paper, we realize the block relative to each function with a set of $L$ parallel queues, in the following referred to as inside-function queues, as shown in Figure 3 for the generic function $F_m$. The generic $l$th inside-function queue of the function $F_m$, indicated as $Q_{m,l}$ in Figure 3, is used to enqueue packets that, after receiving the service of the function $F_m$, will leave the node through the NIC $l$. Let $K^{(F)}_{\mathrm{Ins}}$ be the size of each inside-function queue. Each inside-function queue of the generic function $F_m$ receives a portion of the processor rate assigned to that function. Let $\mu^{(F_{\mathrm{Ins}})}_{[m,l]}$ be the portion of the processor rate assigned to the queue $Q_{m,l}$ of the function $F_m$. Of course, we have

$$\sum_{l=1}^{L} \mu^{(F_{\mathrm{Ins}})}_{[m,l]} = \mu^{(F)}_{[m]}. \quad (2)$$


Figure 3: Block diagram of the generic function $F_m$.

The portion of the processor rate associated with each inside-function queue is dynamically changed by the Processor Arbiter according to the NARR strategy described in Section 3.

3. NARR Processor-Sharing Strategy

The NARR (Network-Aware Round Robin) processor-sharing strategy observes the state of both the inside-function queues and the NIC queues, with the goal of reducing as far as possible the inactivity periods of the NIC output links. As already introduced in the Introduction, its definition starts from the fact that packets that have received the service of a VNF are enqueued to wait for transmission through a given NIC. So, in order to avoid output link capacity waste, NARR dynamically changes the slices of the CPU assigned to each VNF, and in particular to each inside-function queue, according to the state of the output NIC queues, assigning larger CPU slices to serve packets that will leave the node through less-loaded NICs.

More specifically, the Processor Arbiter decides the processor rate portions according to two different steps.

Step 1 (assignment of the processor rate portion to the aggregation of queues whose output is a specific NIC). This step meets the target of the proposed strategy, that is, to reduce as much as possible underutilization of the NIC output links and, as a consequence, delays in the relative queues. To this purpose, let us consider a virtual queue that contains all the packets that are stored in all the inside-function queues $Q_{m,l}$, for each $m \in [1, M]$, that is, all the packets that will leave the node through the NIC $l$. Let us indicate this virtual queue as $Q^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$ and its service rate as $\mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$. Of course, we have

$$\mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}} = \sum_{m=1}^{M} \mu^{(F_{\mathrm{Ins}})}_{[m,l]}. \quad (3)$$

With this in mind, the idea is to give a higher processor slice to the inside-function queues whose flows are directed to the NICs that are emptying.

Taking into account the goal of privileging the flows that will leave the node through underloaded NICs, the Processor Arbiter calculates $\mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$ as follows:

$$\mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}} = \frac{q_{\mathrm{ref}} - S^{(\mathrm{NIC})}_l}{\sum_{j=1}^{L} \left( q_{\mathrm{ref}} - S^{(\mathrm{NIC})}_j \right)} \, \mu^{(P)}, \quad (4)$$

where $S^{(\mathrm{NIC})}_l$ represents the state of the queue associated with the NIC $l$, while $q_{\mathrm{ref}}$ is defined as follows:

$$q_{\mathrm{ref}} = \min \left\{ \alpha \cdot \max_j \left( S^{(\mathrm{NIC})}_j \right), \, K^{(\mathrm{NIC})} \right\}. \quad (5)$$

The term $q_{\mathrm{ref}}$ is a reference target value calculated from the state of the NIC queue that has the highest length, amplified with a coefficient $\alpha$ and truncated to the maximum queue size $K^{(\mathrm{NIC})}$. It is determined in such a way that, if we consider $\alpha = 1$, the NIC queue that has the highest length does not receive packets from the inside-function queues, because their service rate is set to zero; the other queues receive packets with a rate that is proportional to the distance between their length and the length of the most overloaded NIC queue. However, through an extensive set of simulations, we deduced that setting $\alpha = 1$ causes bad performance, because there is always a group of inside-function queues that are not served. Instead, all the $\alpha$ values in the interval $]1, 2]$ give almost equivalent performance. For this reason, in the numerical analysis presented in Section 4, we have set $\alpha = 1.2$.
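A minimal sketch of Step 1 is reported below (in Python; names are illustrative, and the even split used when all NIC queues are empty is an assumption of this sketch, since (4)-(5) leave that degenerate case unspecified).

    def nic_aggregate_rates(nic_states, mu_p, k_nic, alpha=1.2):
        # Step 1 of NARR: share the processor rate mu_p among the per-NIC
        # aggregates according to (4)-(5), favoring the less-loaded NICs.
        q_ref = min(alpha * max(nic_states), k_nic)      # (5)
        gaps = [q_ref - s for s in nic_states]           # distance from target
        total = sum(gaps)
        if total <= 0.0:                                 # all queues empty: even split
            return [mu_p / len(nic_states)] * len(nic_states)
        return [mu_p * g / total for g in gaps]          # (4)

    # Example: NIC 2 is empty, so its aggregate gets the largest slice.
    print(nic_aggregate_rates([60, 0, 40], mu_p=306e3, k_nic=3000))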

Step 2 (assignment of the processor rate portion to each inside-function queue). Let us consider the generic $l$th inside-function queue of the function $F_m$, that is, the queue $Q_{m,l}$. Its service rate is calculated as being proportional to the current state of this queue in comparison with the other $l$th queues of the other functions. To this purpose, let us indicate the state of the virtual queue $Q^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$ as $S^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$. Of course, it can be calculated as the sum of the states of all the inside-function queues $Q_{m,l}$, for each $m \in [1, M]$, that is,

$$S^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}} = \sum_{k=1}^{M} S^{(F_{\mathrm{Ins}})}_{k,l}. \quad (6)$$

So, the service rate of the inside-function queue $Q_{m,l}$ is determined as a fraction of the service rate assigned at the first step to the virtual queue $Q^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$, $\mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$, as follows:

$$\mu^{(F_{\mathrm{Ins}})}_{[m,l]} = \frac{S^{(F_{\mathrm{Ins}})}_{m,l}}{S^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}} \, \mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}. \quad (7)$$

Of course, if at any time an inside-function queue remains empty, the processor rate portion assigned to it will be shared among the other queues, proportionally to the processor portions previously assigned. Likewise, if at some instant an empty queue receives a new packet, the previous processor rate portion is reassigned to that queue.
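Step 2 can be sketched as follows (again with illustrative names), reusing the aggregate rates computed by the previous sketch. Note that the proportional rule (7) already implements the redistribution just described: an empty inside-function queue gets a zero slice, and its share flows to the nonempty queues of the same aggregate; the even split for an all-empty aggregate is an assumption of the sketch.

    def inside_queue_rates(inside_states, mu_aggr):
        # Step 2 of NARR: for each NIC l, share mu_aggr[l] among the queues
        # Q_{m,l} proportionally to their lengths, as in (6)-(7).
        # inside_states[m][l] is the length of Q_{m,l}.
        M, L = len(inside_states), len(inside_states[0])
        rates = [[0.0] * L for _ in range(M)]
        for l in range(L):
            s_aggr = sum(inside_states[m][l] for m in range(M))   # (6)
            for m in range(M):
                if s_aggr > 0:
                    rates[m][l] = inside_states[m][l] / s_aggr * mu_aggr[l]  # (7)
                else:
                    rates[m][l] = mu_aggr[l] / M   # all queues empty (assumption)
        return rates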


4. Case Study

In this section we present a numerical analysis of the behavior of an SDN/NFV node that applies the proposed NARR processor-sharing strategy, with the target of evaluating the achieved performance. To this purpose, we will consider two other processor-sharing strategies as reference, in the following referred to as round robin (RR) and queue-length weighted round robin (QLWRR). In both the reference cases, the node has the same $L$ NIC queues, but it has only $M$ processor queues, one for each function. Each of the $M$ queues has a size of $K^{(F)} = L \cdot K^{(F)}_{\mathrm{Ins}}$, where $K^{(F)}_{\mathrm{Ins}}$ represents the size of each internal function queue, already defined so far for the proposed strategy.

The RR strategy applies the classical round robin scheduling policy to serve the $M$ function queues, that is, it serves each function queue with a rate $\mu^{(F)}_{\mathrm{RR}} = \mu^{(P)}/M$.

The QLWRR strategy, on the other hand, serves each function queue with a rate that is proportional to the queue length, that is,

$$\mu^{(F_m)}_{\mathrm{QLWRR}} = \frac{S^{(F_m)}}{\sum_{k=1}^{M} S^{(F_k)}} \, \mu^{(P)}, \quad (8)$$

where $S^{(F_m)}$ is the state of the queue associated with the function $F_m$.

4.1. Parameter Settings. In this numerical analysis we consider a node with $M = 4$ VNFs and $L = 3$ output NICs. We loaded the node with a balanced traffic constituted by $N_F = 12$ flows, each characterized by a different 2-uple (requested VNF, output NIC). Each flow has been generated with an on-off model characterized by exponentially distributed on and off periods. When a flow is in the off state, no packets arrive to the node from it; instead, when it is in the on state, packets arrive with an average rate $\lambda_{\mathrm{ON}}$. Each packet is assumed with an exponentially distributed size with a mean of 1 kbyte. In all the simulations we have considered the same on-off cycle duration $\delta = T_{\mathrm{OFF}} + T_{\mathrm{ON}} = 5$ ms, while we have varied the burstiness. Burstiness of on-off sources is defined as

$$b = \frac{\lambda_{\mathrm{ON}}}{\lambda_{\mathrm{Mean}}}, \quad (9)$$

where $\lambda_{\mathrm{Mean}}$ is the mean emission rate. Now, indicating the probability of the on state as $\pi_{\mathrm{ON}}$, and taking into account that $\lambda_{\mathrm{Mean}} = \lambda_{\mathrm{ON}} \cdot \pi_{\mathrm{ON}}$ and $\pi_{\mathrm{ON}} = T_{\mathrm{ON}}/(T_{\mathrm{OFF}} + T_{\mathrm{ON}})$, we have

$$b = \frac{T_{\mathrm{OFF}} + T_{\mathrm{ON}}}{T_{\mathrm{ON}}}. \quad (10)$$

In our analysis the burstiness has been varied in the range $[2, 22]$. Consequently, we have derived the mean durations of the off and on periods as follows:

$$T_{\mathrm{ON}} = \frac{\delta}{b}, \qquad T_{\mathrm{OFF}} = \delta - T_{\mathrm{ON}}. \quad (11)$$

Finally, in order to maintain the same mean packet rate for different values of $T_{\mathrm{OFF}}$ and $T_{\mathrm{ON}}$, we have assumed that 75 packets are transmitted on average in each on period, that is, $\lambda_{\mathrm{ON}} = 75/T_{\mathrm{ON}}$. The resulting mean emission rate is $\lambda_{\mathrm{Mean}} = 122.9$ Mbit/s.

As far as the NICs are concerned, we have considered an output rate of $\mu^{(\mathrm{NIC})} = 980$ Mbit/s, so having a utilization coefficient on each NIC of $\rho^{(\mathrm{NIC})} = 0.5$, and a queue size $K^{(\mathrm{NIC})} = 3000$ packets. Instead, regarding the processor, we considered a size of each inside-function queue of $K^{(F)}_{\mathrm{Ins}} = 3000$ packets. Finally, we have analyzed two different processor cases. In the first case we considered a processor that is able to process $\mu^{(P)} = 306$ kpackets/s, while in the second case we assumed a processor rate of $\mu^{(P)} = 204$ kpackets/s. Therefore, defining the processor utilization coefficient as

$$\rho^{(P)} = \frac{N_F \cdot \lambda_{\mathrm{Mean}}}{\mu^{(P)}}, \quad (12)$$

with $N_F \cdot \lambda_{\mathrm{Mean}}$ being the total mean arrival rate to the node, the two considered cases are characterized by a processor utilization coefficient of $\rho^{(P)}_{\mathrm{Low}} = 0.6$ and $\rho^{(P)}_{\mathrm{High}} = 0.9$, respectively.
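The parameter derivation of (9)-(12) is summarized in the sketch below (the 8192-bit packet size follows from the 1-kbyte mean; variable names are illustrative).

    DELTA = 5e-3                    # on-off cycle duration: 5 ms
    PACKET_BITS = 8192              # mean packet size: 1 kbyte
    N_FLOWS, PKTS_PER_ON = 12, 75

    def onoff_params(b):
        # On-off source parameters from the burstiness b, per (9)-(11).
        t_on = DELTA / b
        t_off = DELTA - t_on
        lam_on = PKTS_PER_ON / t_on          # packets/s in the on state
        lam_mean = PKTS_PER_ON / DELTA       # = lam_on * t_on / delta
        return t_on, t_off, lam_on, lam_mean

    t_on, t_off, lam_on, lam_mean = onoff_params(b=7)
    print(lam_mean * PACKET_BITS / 1e6)      # ~122.9 Mbit/s, as in the text
    print(N_FLOWS * lam_mean / 306e3)        # (12): ~0.59 for the faster processor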

4.2. Numerical Results. In this section we present some results achieved by discrete-event simulations. The simulation tool used in the paper is publicly available in [27]. We first present a temporal analysis of the main variables characterizing the SDN/NFV node, and then we show a performance comparison of NARR with the two strategies RR and QLWRR, taken as a reference.

For the temporal analysis we focus on a short time interval of 720 μs, in order to be able to clearly highlight the evolution of the considered processes. We have loaded the node with an on-off traffic like the one described in Section 4.1, with a burstiness $b = 7$.

Figures 4, 5, and 6 show the time evolution of the length of the queues associated with the NICs, the processor slice assigned to each virtual queue loading each NIC, and the length of the same virtual queues. We can subdivide the considered time interval into three different periods.

In the first period, ranging between the instants 0.1063 and 0.1067, from Figure 4 we can notice that the NIC queue $Q^{(\mathrm{NIC})}_1$ has a greater length than the queue $Q^{(\mathrm{NIC})}_3$, while $Q^{(\mathrm{NIC})}_2$ is empty. For this reason, in this period, as shown in Figure 5, the processor is shared between $Q^{(\rightarrow \mathrm{NIC}_1)}_{\mathrm{Aggr}}$ and $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$, and the slice assigned to serve $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ is higher, in such a way that the two queue lengths $Q^{(\mathrm{NIC})}_1$ and $Q^{(\mathrm{NIC})}_3$ reach the same value, a situation that happens at the end of the first period, around the instant 0.1067. During this period, the behavior of both the virtual queues $Q^{(\rightarrow \mathrm{NIC}_1)}_{\mathrm{Aggr}}$ and $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ in Figure 6 remains flat, showing the fact that the received processor rates are able to balance the arrival rates.

At the beginning of the second period, which ranges between the instants 0.1067 and 0.10693, the processor rate assigned to the virtual queue $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ has become not sufficient to serve the amount of arriving traffic, and so the virtual queue $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ increases its length, as shown in Figure 6.

ueue

leng

th (p

acke

ts)

01064 01065 01066 01067 01068 01069 010701063Time (s)

0

10

20

30

40

50

60

70

S(NIC)1

S(NIC)2

S(NIC)3

S(NIC)3

Figure 4 Lenght evolution of the NIC queues

Figure 5: Packet rate assigned to the virtual queues.

Figure 6: Length evolution of the virtual queues.

During this second period, the processor slices assigned to the two queues $Q^{(\rightarrow \mathrm{NIC}_1)}_{\mathrm{Aggr}}$ and $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ are adjusted in such a way that $Q^{(\mathrm{NIC})}_1$ and $Q^{(\mathrm{NIC})}_3$ remain with comparable lengths.

The last period starts at the instant 0.10693, characterized by the fact that the aggregated queue $Q^{(\rightarrow \mathrm{NIC}_2)}_{\mathrm{Aggr}}$ leaves the empty state and therefore participates in the processor-sharing process. Since the NIC queue $Q^{(\mathrm{NIC})}_2$ is low-loaded, as shown in Figure 4, the largest slice is assigned to $Q^{(\rightarrow \mathrm{NIC}_2)}_{\mathrm{Aggr}}$, in such a way that $Q^{(\mathrm{NIC})}_2$ can reach the same length of the other two NIC queues as soon as possible.

Now, in order to show how the second step of the proposed strategy works, we present the behavior of the inside-function queues whose output is sent to the NIC queue $Q^{(\mathrm{NIC})}_3$ during the same short time interval considered so far. More specifically, Figures 7 and 8 show the length of the considered inside-function queues and the processor slices assigned to them, respectively. The behavior of the queue $Q_{2,3}$ is not shown because it is empty in the considered period. As we can observe from the above figures, we can subdivide the interval into four periods:

(i) The first period, ranging in the interval [0.1063, 0.10657], is characterized by an empty state of the queue $Q_{3,3}$. Thus, in this period, the processor slice assigned to the aggregated queue $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ is shared by $Q_{1,3}$ and $Q_{4,3}$ only.

(ii) During the second period, ranging in the interval [0.10657, 0.1067], $Q_{1,3}$ is scarcely loaded (in particular, it is empty in the second part of this period), and so the processor slice assigned to $Q_{3,3}$ is increased.

(iii) In the third period, ranging between 0.1067 and 0.10693, all the queues increase and equally share the processor.

(iv) Finally, in the last period, starting at the instant 0.10693, as already observed in Figure 5, the processor slice assigned to the aggregated queue $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ is suddenly decreased and, consequently, the slices assigned to the queues $Q_{1,3}$, $Q_{3,3}$, and $Q_{4,3}$ are decreased as well.

The steady-state analysis is presented in Figures 9, 10, and 11, which show the mean delay in the inside-function queues, the mean delay in the NIC queues, and the durations of the off- and on-states on the output links. The values reported in all the figures have been derived as the mean values of the results of many simulation experiments, using Student's t-distribution and with a 95% confidence interval. The number of experiments carried out to evaluate each numerical result has been automatically decided by the simulation tool, with the requirement of achieving a confidence interval less than 0.001 of the estimated mean value. To this purpose, the confidence interval is calculated at the end of each run, and simulation is stopped only when the confidence interval matches the maximum error requirement. The duration of each run has been chosen in such a way that the sample standard deviation is so low that less than 30 runs are enough to match the requirement.
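A sketch of this stopping rule is given below, assuming SciPy for the Student's t quantile (the run_once callable and the run limits are illustrative, not taken from the simulator).

    import statistics
    from scipy import stats

    def run_until_confident(run_once, rel_err=0.001, conf=0.95,
                            min_runs=5, max_runs=1000):
        # Repeat independent runs until the Student-t confidence interval
        # half-width drops below rel_err times the estimated mean.
        samples = []
        while len(samples) < max_runs:
            samples.append(run_once())
            if len(samples) >= min_runs:
                m = statistics.mean(samples)
                s = statistics.stdev(samples)
                t = stats.t.ppf(0.5 + conf / 2.0, df=len(samples) - 1)
                if t * s / len(samples) ** 0.5 <= rel_err * abs(m):
                    break
        return statistics.mean(samples), len(samples)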


Figure 7: Length evolution of the inside-function queues $Q_{m,3}$ for each $m \in [1, M]$. (a) Queue $Q_{1,3}$; (b) queue $Q_{3,3}$; (c) queue $Q_{4,3}$.

The figures compare the results achieved with the proposed strategy with the ones obtained with the two strategies RR and QLWRR. As said so far, results have been obtained against the burstiness and for two different values of the utilization coefficient, that is, $\rho^{(P)}_{\mathrm{Low}} = 0.6$ and $\rho^{(P)}_{\mathrm{High}} = 0.9$.

As expected, the mean lengths of the processor queues and the NIC queues increase with both the burstiness and the utilization coefficient.

Instead, we can note that the mean length of the processor queues is not affected by the applied policy. In fact, packets requiring the same function are enqueued in a unique queue and served with a rate $\mu^{(P)}/4$ when the RR or QLWRR strategies are applied, while, when the NARR strategy is used, they are split into 12 different queues and served with a rate $\mu^{(P)}/12$. If in classical queueing theory [28] the second case is worse than the first one, because of the presence of service rate wastes during the more probable periods of empty queues, this is not the case here, because the processor capacity of an empty queue is dynamically reassigned to the other queues.

The advantage of the NARR strategy is evident in Figure 10, where the mean delay in the NIC queues is represented. In fact, we can observe that, by only giving more processor rate to the most loaded processor queues (with the QLWRR strategy), performance improvements are negligible, while, applying the NARR strategy, we are able to obtain a delay reduction of about 12% in the case of a more performant processor ($\rho^{(P)}_{\mathrm{Low}} = 0.6$), reaching 50% when the processor works with $\rho^{(P)}_{\mathrm{High}} = 0.9$. The performance gain achieved with the NARR strategy increases with the burstiness and the processor load conditions, which are both likely. In fact, the first condition is due to the high burstiness of the Internet traffic; the second one is true as well, because the processor should not be overdimensioned for economic purposes; otherwise, if overdimensioned, it can be controlled with a processor rate management policy like the one presented in [29], in order to save energy.

Finally, Figure 11 shows the mean durations of the on- and off-periods on the node output links.


Figure 8: Processor rate assigned to the inside-function queues $Q_{m,3}$ for each $m \in [1, M]$. (a) Processor rate $\mu^{(F_{\mathrm{Ins}})}_{[1,3]}$; (b) processor rate $\mu^{(F_{\mathrm{Ins}})}_{[3,3]}$; (c) processor rate $\mu^{(F_{\mathrm{Ins}})}_{[4,3]}$.

Only one curve is shown for each case because we have considered a utilization coefficient $\rho^{(\mathrm{NIC})} = 0.5$ on each NIC queue, and therefore in the considered case we have $T_{\mathrm{ON}} = T_{\mathrm{OFF}}$. As expected, the mean on-off durations are higher when the processor rate is higher (i.e., for the lower utilization coefficient): in this case batches of packets reach the NIC queues more quickly and are transmitted in longer uninterrupted bursts. These results can be used to model the output traffic of each SDN/NFV node as an Interrupted Poisson Process (IPP) [30]. This model can be iteratively used to represent the input traffic of other nodes, with the final target of realizing the model of a whole SDN/NFV network.
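As an illustration, an IPP source with exponentially distributed on/off periods can be generated as follows (a sketch with illustrative names, not the simulator's code).

    import random

    def ipp_arrivals(t_on_mean, t_off_mean, lam_on, horizon):
        # Interrupted Poisson Process: exponential on/off periods, with
        # Poisson arrivals at rate lam_on only during the on periods.
        t, on, arrivals = 0.0, True, []
        while t < horizon:
            period = random.expovariate(1.0 / (t_on_mean if on else t_off_mean))
            if on:
                tau = t + random.expovariate(lam_on)
                while tau < min(t + period, horizon):
                    arrivals.append(tau)
                    tau += random.expovariate(lam_on)
            t += period
            on = not on
        return arrivals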

5. Conclusions and Future Work

This paper addresses the problem of intranode resource allocation by introducing NARR, a processor-sharing strategy that leverages on the consideration that, in any SDN/NFV node, packets that have received the service of a VNF are enqueued to wait for transmission through one of the output NICs. Therefore, the idea at the base of NARR is to dynamically change the slices of the CPU assigned to each VNF according to the state of the output NIC queues, giving more CPU to serve packets that will leave the node through the less-loaded NICs. In this way, wastes of the NIC output link capacities are minimized, and consequently the overall delay experienced by packets traversing the nodes that implement NARR is reduced.

SDN/NFV nodes that implement NARR can coexist in the same network with nodes that use other strategies, so facilitating a gradual introduction in the Future Internet.

As a side contribution, the simulator tool, which is publicly available on the web, gives the on-off model of the output links associated with each NIC as one of the results. This model can be used as a building block to realize a model for the design and performance evaluation of a whole SDN/NFV network.

Figure 9: Mean delay in the function internal queues.

Figure 10: Mean delay in the NIC queues.


Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work has been partially supported by the INPUT (In-Network Programmability for Next-Generation Personal cloUd Service Support) project, funded by the European Commission under the Horizon 2020 Programme (Call H2020-ICT-2014-1, Grant no. 644672).

Figure 11: Mean off- and on-state durations of the traffic on the output links.


References

[1] White paper on "Software-Defined Networking: The New Norm for Networks," https://www.opennetworking.org.

[2] M. Yu, L. Jose, and R. Miao, "Software defined traffic measurement with OpenSketch," in Proceedings of the Symposium on Network Systems Design and Implementation (NSDI '13), vol. 13, pp. 29-42, Lombard, Ill, USA, April 2013.

[3] H. Kim and N. Feamster, "Improving network management with software defined networking," IEEE Communications Magazine, vol. 51, no. 2, pp. 114-119, 2013.

[4] S. H. Yeganeh, A. Tootoonchian, and Y. Ganjali, "On scalability of software-defined networking," IEEE Communications Magazine, vol. 51, no. 2, pp. 136-141, 2013.

[5] B. A. A. Nunes, M. Mendonca, X.-N. Nguyen, K. Obraczka, and T. Turletti, "A survey of software-defined networking: past, present, and future of programmable networks," IEEE Communications Surveys and Tutorials, vol. 16, no. 3, pp. 1617-1634, 2014.

[6] White paper on "Network Functions Virtualisation," http://portal.etsi.org/NFV/NFV_White_Paper.pdf.

[7] Network Functions Virtualisation (NFV): Network Operator Perspectives on Industry Progress, ETSI, October 2013, http://portal.etsi.org/NFV/NFV_White_Paper2.pdf.

[8] A. Manzalini, R. Saracco, E. Zerbini et al., "Software-Defined Networks for Future Networks and Services," white paper based on the IEEE Workshop SDN4FNS, http://sites.ieee.org/sdn4fns/whitepaper.

[9] R. Z. Frantz, R. Corchuelo, and J. L. Arjona, "An efficient orchestration engine for the cloud," in Proceedings of the IEEE 3rd International Conference on Cloud Computing Technology and Science (CloudCom '11), vol. 2, pp. 711-716, Athens, Greece, December 2011.

[10] K. Bousselmi, Z. Brahmi, and M. M. Gammoudi, "Cloud services orchestration: a comparative study of existing approaches," in Proceedings of the 28th IEEE International Conference on Advanced Information Networking and Applications Workshops (WAINA '14), pp. 410-416, Victoria, Canada, May 2014.

[11] A. Lombardo, A. Manzalini, V. Riccobene, and G. Schembra, "An analytical tool for performance evaluation of software defined networking services," in Proceedings of the IEEE Network Operations and Management Symposium (NOMS '14), pp. 1-7, Krakow, Poland, May 2014.

[12] A. Lombardo, A. Manzalini, G. Schembra, G. Faraci, C. Rametta, and V. Riccobene, "An open framework to enable NetFATE (network functions at the edge)," in Proceedings of the IEEE Workshop on Management Issues in SDN, SDI and NFV (Mission '15), pp. 1-6, London, UK, April 2015.

[13] A. Manzalini, R. Minerva, F. Callegati, W. Cerroni, and A. Campi, "Clouds of virtual machines in edge networks," IEEE Communications Magazine, vol. 51, no. 7, pp. 63-70, 2013.

[14] H. Moens and F. De Turck, "VNF-P: a model for efficient placement of virtualized network functions," in Proceedings of the 10th International Conference on Network and Service Management (CNSM '14), pp. 418-423, Rio de Janeiro, Brazil, November 2014.

[15] K. Katsalis, G. S. Paschos, Y. Viniotis, and L. Tassiulas, "CPU provisioning algorithms for service differentiation in cloud-based environments," IEEE Transactions on Network and Service Management, vol. 12, no. 1, pp. 61-74, 2015.

[16] D. Tuncer, M. Charalambides, G. Pavlou, and N. Wang, "Towards decentralized and adaptive network resource management," in Proceedings of the 7th International Conference on Network and Service Management (CNSM '11), pp. 1-6, Paris, France, October 2011.

[17] D. Tuncer, M. Charalambides, G. Pavlou, and N. Wang, "DACoRM: a coordinated, decentralized and adaptive network resource management scheme," in Proceedings of the IEEE Network Operations and Management Symposium (NOMS '12), pp. 417-425, IEEE, Maui, Hawaii, USA, April 2012.

[18] M. Charalambides, D. Tuncer, L. Mamatas, and G. Pavlou, "Energy-aware adaptive network resource management," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 369-377, Ghent, Belgium, May 2013.

[19] C. Mastroianni, M. Meo, and G. Papuzzo, "Probabilistic consolidation of virtual machines in self-organizing cloud data centers," IEEE Transactions on Cloud Computing, vol. 1, no. 2, pp. 215-228, 2013.

[20] B. Guan, J. Wu, Y. Wang, and S. U. Khan, "CIVSched: a communication-aware inter-VM scheduling technique for decreased network latency between co-located VMs," IEEE Transactions on Cloud Computing, vol. 2, no. 3, pp. 320-332, 2014.

[21] G. Faraci and G. Schembra, "An analytical model to design and manage a green SDN/NFV CPE node," IEEE Transactions on Network and Service Management, vol. 12, no. 3, pp. 435-450, 2015.

[22] L. Kleinrock, "Time-shared systems: a theoretical treatment," Journal of the ACM, vol. 14, no. 2, pp. 242-261, 1967.

[23] S. F. Yashkov and A. S. Yashkova, "Processor sharing: a survey of the mathematical theory," Automation and Remote Control, vol. 68, no. 9, pp. 1662-1731, 2007.

[24] ETSI GS NFV-INF 001 v1.1.1, "Network Functions Virtualization (NFV): Infrastructure Overview," 2015, https://www.etsi.org/deliver/etsi_gs/NFV-INF/001_099/001/01.01.01_60/gs_NFV-INF001v010101p.pdf.

[25] T. N. Subedi, K. K. Nguyen, and M. Cheriet, "OpenFlow-based in-network Layer-2 adaptive multipath aggregation in data centers," Computer Communications, vol. 61, pp. 58-69, 2015.

[26] N. McKeown, T. Anderson, H. Balakrishnan et al., "OpenFlow: enabling innovation in campus networks," ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, pp. 69-74, 2008.

[27] NARR Simulator v0.1, http://www.diit.unict.it/arti/Tools/NARRSimulator_v01.zip.

[28] L. Kleinrock, Queueing Systems. Volume 1: Theory, Wiley-Interscience, 1st edition, 1975.

[29] R. Bruschi, P. Lago, A. Lombardo, and G. Schembra, "Modeling power management in networked devices," Computer Communications, vol. 50, pp. 95-109, 2014.

[30] W. Fischer and K. S. Meier-Hellstern, "The Markov-Modulated Poisson Process (MMPP) cookbook," Performance Evaluation, vol. 18, no. 2, pp. 149-171, 1993.


Editorial

Design of High Throughput and Cost-Efficient Data Center Networks

Vincenzo Eramo,1 Xavier Hesselbach-Serra,2 Yan Luo,3 and Juan Felipe Botero4

1Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, 00184 Rome, Italy
2Network Engineering Department (ENTEL), Universitat Politecnica de Catalunya, 08034 Barcelona, Spain
3Department of Electronics and Computers Engineering (DECE), University of Massachusetts Lowell, Lowell, MA 01854, USA
4Department of Electronic and Telecommunications Engineering (DETE), Universidad de Antioquia, Oficina 19-450, Medellín, Colombia

Correspondence should be addressed to Vincenzo Eramo; vincenzo.eramo@uniroma1.it

Received 20 April 2016; Accepted 20 April 2016

Copyright © 2016 Vincenzo Eramo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Data centers (DC) are characterized by the sharing of compute and storage resources to support Internet services. Today many companies (Amazon, Google, Facebook, etc.) use data centers to offer storage, web search, and large-computation services, with multibillion dollar businesses. The servers are interconnected by elements (switches, routers, interconnection systems, etc.) of a network platform that is referred to as Data Center Network (DCN).

Network Function Virtualization (NFV) technology, introduced by the European Telecommunications Standards Institute (ETSI), applies cloud computing techniques in the telecommunication field, allowing for a virtualization of the network functions to be executed on software modules running in data centers. Any network service is represented by a Service Function Chain (SFC), that is, a set of VNFs to be executed according to a given order. The running of VNFs requires the instantiation of VNF instances (VNFIs), which in general are software modules executed on virtual machines.

The support of NFV requires high-performance servers, due to the higher requirements of network services with respect to classical cloud applications.

The purpose of this special issue is to study and evaluate new solutions for the support of the NFV technology.

The special issue consists of four papers, whose brief summaries are listed below.

"Server Resource Dimensioning and Routing of Service Function Chain in NFV Network Architectures" by V. Eramo et al. focuses on the resource dimensioning and SFC routing problems in NFV architectures. The objective of the problem is to minimize the number of SFCs dropped. The authors formulate the optimization problem and, due to its NP-hard complexity, propose heuristics for both cases of offline and online traffic demand.

"A Game for Energy-Aware Allocation of Virtualized Network Functions" by R. Bruschi et al. presents and evaluates an energy-aware, game-theory-based solution for resource allocation of Virtualized Network Functions (VNFs) within NFV environments. The authors consider each VNF as a player of the problem that competes for the physical network node capacity pool, seeking the minimization of individual cost functions. The physical network nodes dynamically adjust their processing capacity according to the incoming workload flows, by means of an Adaptive Rate strategy that aims at minimizing the product of energy consumption and processing delay.

"A Processor-Sharing Scheduling Strategy for NFV Nodes" by G. Faraci et al. focuses on the allocation strategies of processing resources to the virtual machines running the VNFs. The main contribution of the paper is the definition of a processor-sharing policy, referred to as Network-Aware Round Robin (NARR). The proposed strategy dynamically changes the slices of the CPU assigned to each VNF according to the state of the output network interface card queues. In order not to waste output link bandwidth, more processing resources are assigned to the VNFs whose packets are addressed towards the least loaded output NIC.

"Virtual Networking Performance in OpenStack Platform for Network Function Virtualization" by F. Callegati et al. evaluates the performance of an open-source Virtual Infrastructure Manager (VIM), OpenStack, focusing in particular on packet forwarding performance issues. A set of experiments is presented, referring to a number of scenarios inspired by the cloud computing and NFV paradigms and considering both single- and multitenant scenarios.

Vincenzo Eramo
Xavier Hesselbach-Serra
Yan Luo
Juan Felipe Botero

Research Article

Virtual Networking Performance in OpenStack Platform for Network Function Virtualization

Franco Callegati, Walter Cerroni, and Chiara Contoli

DEI, University of Bologna, Via Venezia 52, 47521 Cesena, Italy

Correspondence should be addressed to Walter Cerroni; walter.cerroni@unibo.it

Received 19 October 2015; Revised 19 January 2016; Accepted 30 March 2016

Academic Editor: Yan Luo

Copyright © 2016 Franco Callegati et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The emerging Network Function Virtualization (NFV) paradigm, coupled with the highly flexible and programmatic control of network devices offered by Software Defined Networking solutions, enables unprecedented levels of network virtualization that will definitely change the shape of future network architectures, where legacy telco central offices will be replaced by cloud data centers located at the edge. On the one hand, this software-centric evolution of telecommunications will allow network operators to take advantage of the increased flexibility and reduced deployment costs typical of cloud computing. On the other hand, it will pose a number of challenges in terms of virtual network performance and customer isolation. This paper intends to provide some insights on how an open-source cloud computing platform such as OpenStack implements multitenant network virtualization and how it can be used to deploy NFV, focusing in particular on packet forwarding performance issues. To this purpose, a set of experiments is presented that refer to a number of scenarios inspired by the cloud computing and NFV paradigms, considering both single tenant and multitenant scenarios. From the results of the evaluation it is possible to highlight potentials and limitations of running NFV on OpenStack.

1. Introduction

Despite the original vision of the Internet as a set of networks interconnected by distributed layer 3 routing nodes, nowadays IP datagrams are not simply forwarded to their final destination based on IP header and next-hop information. A number of so-called middle-boxes process IP traffic, performing cross-layer tasks such as address translation, packet inspection and filtering, QoS management, and load balancing. They represent a significant fraction of network operators' capital and operational expenses. Moreover, they are closed systems, and the deployment of new communication services is strongly dependent on the product capabilities, causing the so-called "vendor lock-in" and Internet "ossification" phenomena [1]. A possible solution to this problem is the adoption of virtualized middle-boxes based on open software and hardware solutions. Network virtualization brings great advantages in terms of flexible network management, performed at the software level, and possible coexistence of multiple customers sharing the same physical infrastructure (i.e., multitenancy). Network virtualization solutions are already widely deployed at different protocol layers, including Virtual Local Area Networks (VLANs), multilayer Virtual Private Network (VPN) tunnels over public wide-area interconnections, and Overlay Networks [2].

Today the combination of emerging technologies such as Network Function Virtualization (NFV) and Software Defined Networking (SDN) promises to bring innovation one step further. SDN provides a more flexible and programmatic control of network devices and fosters new forms of virtualization that will definitely change the shape of future network architectures [3], while NFV defines standards to deploy software-based building blocks implementing highly flexible network service chains capable of adapting to the rapidly changing user requirements [4].

As a consequence, it is possible to imagine a medium-term evolution of the network architectures where middle-boxes will turn into virtual machines (VMs) implementing network functions within cloud computing infrastructures, and telco central offices will be replaced by data centers located at the edge of the network [5–7]. Network operators will take advantage of the increased flexibility and reduced deployment costs typical of the cloud-based approach, paving the way to the upcoming software-centric evolution of telecommunications [8]. However, a number of challenges must be dealt with, in terms of system integration, data center management, and packet processing performance. For instance, if VLANs are used in the physical switches and in the virtual LANs within the cloud infrastructure, a suitable integration is necessary, and the coexistence of different IP virtual networks dedicated to multiple tenants must be seamlessly guaranteed with proper isolation.

Then a few questions are naturally raised. Will cloud computing platforms be actually capable of satisfying the requirements of complex communication environments such as the operators' edge networks? Will data centers be able to effectively replace the existing telco infrastructures at the edge? Will virtualized networks provide performance comparable to that achieved with current physical networks, or will they pose significant limitations? Indeed, the answer to these questions will be a function of the cloud management platform considered. In this work the focus is on OpenStack, which is among the state-of-the-art Linux-based virtualization and cloud management tools. Developed by the open-source software community, OpenStack implements the Infrastructure-as-a-Service (IaaS) paradigm in a multitenant context [9].

To the best of our knowledge, not much work has been reported about the actual performance limits of network virtualization in OpenStack cloud infrastructures under the NFV scenario. Some authors assessed the performance of Linux-based virtual switching [10, 11], while others investigated network performance in public cloud services [12]. Solutions for low-latency SDN implementation on high-performance cloud platforms have also been developed [13]. However, none of the above works specifically deals with NFV scenarios on the OpenStack platform. Although some mechanisms for effectively placing virtual network functions within an OpenStack cloud have been presented [14], a detailed analysis of their network performance has not been provided yet.

This paper aims at providing insights on how the OpenStack platform implements multitenant network virtualization, focusing in particular on the performance issues, trying to fill a gap that is starting to get attention also from the OpenStack developer community [15]. The paper objective is to identify performance bottlenecks in the cloud implementation of the NFV paradigms. An ad hoc set of experiments was designed to evaluate the OpenStack performance under critical load conditions, in both single tenant and multitenant scenarios. The results reported in this work extend the preliminary assessment published in [16, 17].

The paper is structured as follows: the network virtualization concept in cloud computing infrastructures is further elaborated in Section 2; the OpenStack virtual network architecture is illustrated in Section 3; the experimental test-bed that we have deployed to assess its performance is presented in Section 4; the results obtained under different scenarios are discussed in Section 5; some conclusions are finally drawn in Section 6.

2. Cloud Network Virtualization

Generally speaking, network virtualization is not a new concept. Virtual LANs, Virtual Private Networks, and Overlay Networks are examples of virtualization techniques already widely used in networking, mostly to achieve isolation of traffic flows and/or of whole network sections, either for security or for functional purposes, such as traffic engineering and performance optimization [2].

Upon considering cloud computing infrastructures, the concept of network virtualization evolves even further. It is not just that some functionalities can be configured in physical devices to obtain some additional functionality in virtual form. In cloud infrastructures, whole parts of the network are virtual, implemented with software devices and/or functions running within the servers. This new "softwarized" network implementation scenario allows novel network control and management paradigms. In particular, the synergies between NFV and SDN offer programmatic capabilities that allow easily defining and flexibly managing multiple virtual network slices, at levels not achievable before [1].

In cloud networking, the typical scenario is a set of VMs dedicated to a given tenant, able to communicate with each other as if connected to the same Local Area Network (LAN), independently of the physical server/servers they are running on. The VMs and LAN of different tenants have to be isolated and should communicate with the outside world only through layer 3 routing and filtering devices. From such requirements stem two major issues to be addressed in cloud networking: (i) integration of any set of virtual networks defined in the data center physical switches with the specific virtual network technologies adopted by the hosting servers and (ii) isolation among virtual networks that must be logically separated because of being dedicated to different purposes or different customers. Moreover, these problems should be solved with performance optimization in mind, for instance, aiming at keeping VMs with intensive exchange of data colocated in the same server, keeping local traffic inside the host and thus reducing the need for external network resources, and minimizing the communication latency.

The solution to these issues is usually fully supported by the VM manager (i.e., the Hypervisor) running on the hosting servers. Layer 3 routing functions can be executed by taking advantage of lightweight virtualization tools, such as Linux containers or network namespaces, resulting in isolated virtual networks with dedicated network stacks (e.g., IP routing tables and netfilter flow states) [18]. Similarly, layer 2 switching is typically implemented by means of kernel-level virtual bridges/switches interconnecting a VM's virtual interface to a host's physical interface. Moreover, the VM placing algorithms may be designed to take networking issues into account, thus optimizing the networking in the cloud together with computation effectiveness [19]. Finally, it is worth mentioning that whatever network virtualization technology is adopted within a data center, it should be compatible with SDN-based implementation of the control plane (e.g., OpenFlow), for improved manageability and programmability [20].
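As a toy illustration of these kernel-level building blocks, the following minimal sketch (our own example, not part of OpenStack; it assumes root privileges, the iproute2 tools, and illustrative names) creates a network namespace with its own dedicated stack and attaches it to a Linux Bridge through a veth pair:

import subprocess

def sh(cmd):
    # Run a shell command, raising an exception on failure.
    subprocess.run(cmd, shell=True, check=True)

sh("ip netns add tenant-ns")                             # isolated network stack
sh("ip link add veth-host type veth peer name veth-ns")  # virtual Ethernet pair
sh("ip link set veth-ns netns tenant-ns")                # one endpoint into the namespace
sh("ip link add name br-demo type bridge")               # kernel-level Linux Bridge
sh("ip link set veth-host master br-demo")               # other endpoint onto the bridge
sh("ip link set br-demo up")
sh("ip link set veth-host up")
sh("ip netns exec tenant-ns ip addr add 10.0.0.2/24 dev veth-ns")
sh("ip netns exec tenant-ns ip link set veth-ns up")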


Figure 1: Main components of an OpenStack cloud setup.

For the purposes of this work, the implementation of layer 2 connectivity in the cloud environment is of particular relevance. Many Hypervisors running on Linux systems implement the LANs inside the servers using Linux Bridge, the native kernel bridging module [21]. This solution is straightforward and is natively integrated with the powerful Linux packet filtering and traffic conditioning kernel functions. The overall performance of this solution should be at a reasonable level when the system is not overloaded [22]. The Linux Bridge basically works as a transparent bridge with MAC learning, providing the same functionality as a standard Ethernet switch in terms of packet forwarding. But such standard behavior is not compatible with SDN and is not flexible enough when aspects such as multitenant traffic isolation, transparent VM mobility, and fine-grained forwarding programmability are critical. The Linux-based bridging alternative is Open vSwitch (OVS), a software switching facility specifically designed for virtualized environments and capable of reaching kernel-level performance [23]. OVS is also OpenFlow-enabled and therefore fully compatible and integrated with SDN solutions.

3. OpenStack Virtual Network Infrastructure

OpenStack provides cloud managers with a web-based dashboard, as well as a powerful and flexible Application Programmable Interface (API), to control a set of physical hosting servers executing different kinds of Hypervisors (in general, OpenStack is designed to manage a number of computers hosting application servers; these application servers can be executed by fully fledged VMs, lightweight containers, or bare-metal hosts; in this work we focus on the most challenging case of application servers running on VMs) and to manage the required storage facilities and virtual network infrastructures. The OpenStack dashboard also allows instantiating computing and networking resources within the data center infrastructure with a high level of transparency. As illustrated in Figure 1, a typical OpenStack cloud is composed of a number of physical nodes and networks:

(i) Controller node: managing the cloud platform.
(ii) Network node: hosting the networking services for the various tenants of the cloud and providing external connectivity.
(iii) Compute nodes: as many hosts as needed in the cluster to execute the VMs.
(iv) Storage nodes: to store data and VM images.
(v) Management network: the physical networking infrastructure used by the controller node to manage the OpenStack cloud services running on the other nodes.
(vi) Instance/tunnel network (or data network): the physical network infrastructure connecting the network node and the compute nodes, to deploy virtual tenant networks and allow inter-VM traffic exchange and VM connectivity to the cloud networking services running in the network node.
(vii) External network: the physical infrastructure enabling connectivity outside the data center.

OpenStack has a component specifically dedicated to network service management: this component, formerly known as Quantum, was renamed as Neutron in the Havana release. Neutron decouples the network abstractions from the actual implementation and provides administrators and users with a flexible interface for virtual network management. The Neutron server is centralized and typically runs in the controller node. It stores all network-related information and implements the virtual network infrastructure in a distributed and coordinated way. This allows Neutron to transparently manage multitenant networks across multiple compute nodes and to provide transparent VM mobility within the data center.

Neutron's main network abstractions are:

(i) network, a virtual layer 2 segment;
(ii) subnet, a layer 3 IP address space used in a network;
(iii) port, an attachment point to a network and to one or more subnets on that network;
(iv) router, a virtual appliance that performs routing between subnets and address translation;
(v) DHCP server, a virtual appliance in charge of IP address distribution;
(vi) security group, a set of filtering rules implementing a cloud-level firewall.

A cloud customer wishing to implement a virtual infrastructure in the cloud is considered an OpenStack tenant and can use the OpenStack dashboard to instantiate computing and networking resources, typically creating a new network and the necessary subnets, optionally spawning the related DHCP servers, then starting as many VM instances as required, based on a given set of available images, and specifying the subnet (or subnets) to which the VM is connected. Neutron takes care of creating a port on each specified subnet (and its underlying network) and of connecting the VM to that port, while the DHCP service on that network (resident in the network node) assigns a fixed IP address to it. Other virtual appliances (e.g., routers providing global connectivity) can be implemented directly in the cloud platform, by means of containers and network namespaces typically defined in the network node. The different tenant networks are isolated by means of VLANs and network namespaces, whereas the security groups protect the VMs from external attacks or unauthorized access. When some VM instances offer services that must be reachable by external users, the cloud provider defines a pool of floating IP addresses on the external network and configures the network node with VM-specific forwarding rules based on those floating addresses.
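The same workflow can also be scripted against the OpenStack API. The sketch below is a hypothetical example using the unified openstack command-line client (the deployment studied in this paper predates it and used the dashboard and per-service clients); network, flavor, and image names are placeholders, and an external network called public is assumed to exist:

import subprocess

def sh(cmd):
    subprocess.run(cmd, shell=True, check=True)

sh("openstack network create demo-net")                  # tenant virtual network
sh("openstack subnet create --network demo-net "
   "--subnet-range 10.0.1.0/24 demo-subnet")             # layer 3 address space
sh("openstack server create --flavor m1.small "
   "--image cirros --network demo-net demo-vm")          # VM attached via a Neutron port
sh("openstack floating ip create public")                # address from the external pool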

OpenStack implements the virtual network infrastructure (VNI) exploiting multiple virtual bridges connecting virtual and/or physical interfaces that may reside in different network namespaces. To better understand such a complex system, a graphical tool was developed to display all the network elements used by OpenStack [24]. Two examples, showing the internal state of a network node connected to three virtual subnets and a compute node running two VMs, are displayed in Figures 2 and 3, respectively.

Each node runs an OVS-based integration bridge named br-int and, connected to it, an additional OVS bridge for each data center physical network attached to the node. So the network node (Figure 2) includes br-tun for the instance/tunnel network and br-ex for the external network. A compute node (Figure 3) includes br-tun only.

Layer 2 virtualization and multitenant isolation on the physical network can be implemented using either VLANs or layer 2-in-layer 3/4 tunneling solutions, such as Virtual eXtensible LAN (VXLAN) or Generic Routing Encapsulation (GRE), which allow extending the local virtual networks also to remote data centers [25]. The examples shown in Figures 2 and 3 refer to the case of tenant isolation implemented with GRE tunnels on the instance/tunnel network. Whatever virtualization technology is used in the physical network, its virtual networks must be mapped into the VLANs used internally by Neutron to achieve isolation. This is performed by taking advantage of the programmable features available in OVS, through the insertion of appropriate OpenFlow mapping rules in br-int and br-tun.
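As an illustration of this mechanism, the following hedged sketch installs by hand a pair of rules of the kind discussed here; the tunnel ID and VLAN values are illustrative, and the rules Neutron actually generates are considerably more elaborate:

import subprocess

def sh(cmd):
    subprocess.run(cmd, shell=True, check=True)

# Ingress: traffic arriving on br-tun with GRE tunnel ID 1 is tagged
# with the internal VLAN 1 before normal MAC learning and forwarding.
sh('ovs-ofctl add-flow br-tun "table=0,priority=1,tun_id=0x1,'
   'actions=mod_vlan_vid:1,NORMAL"')

# Egress: traffic tagged with internal VLAN 1 is untagged and sent
# out with tunnel ID 1.
sh('ovs-ofctl add-flow br-tun "table=0,priority=1,dl_vlan=1,'
   'actions=strip_vlan,set_tunnel:0x1,NORMAL"')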

Virtual bridges are interconnected by means of either virtual Ethernet (veth) pairs or patch port pairs, consisting of two virtual interfaces that act as the endpoints of a pipe: anything entering one endpoint always comes out on the other side.

From the networking point of view, the creation of a new VM instance involves the following steps:

(i) The OpenStack scheduler component running in the controller node chooses the compute node that will host the VM.
(ii) A tap interface is created for each VM network interface, to connect it to the Linux kernel.
(iii) A Linux Bridge dedicated to each VM network interface is created (in Figure 3 two of them are shown) and the corresponding tap interface is attached to it.
(iv) A veth pair connecting the new Linux Bridge to the integration bridge is created.

The veth pair clearly emulates the Ethernet cable that would connect the two bridges in real life. Nonetheless, why the new Linux Bridge is needed is not intuitive, as the VM's tap interface could be directly attached to br-int. In short, the reason is that the antispoofing rules currently implemented by Neutron adopt the native Linux kernel filtering functions (netfilter) applied to bridged tap interfaces, which work only under Linux Bridges. Therefore, the Linux Bridge is required as an intermediate element to interconnect the VM to the integration bridge. The security rules are applied to the Linux Bridge on the tap interface that connects the kernel-level bridge to the virtual Ethernet port of the VM running in user-space.
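These per-VM plumbing steps can be reproduced by hand; the following sketch does so with illustrative interface names (OpenStack derives the actual names from the Neutron port UUID) and assumes root privileges, the iproute2 and ovs-vsctl tools, and an existing integration bridge br-int:

import subprocess

def sh(cmd):
    subprocess.run(cmd, shell=True, check=True)

sh("ip tuntap add dev tap-vm1 mode tap")                 # (ii) tap interface for the VM
sh("ip link add qbr-vm1 type bridge")                    # (iii) per-VM Linux Bridge
sh("ip link set tap-vm1 master qbr-vm1")                 #       attach the tap to it
sh("ip link add qvb-vm1 type veth peer name qvo-vm1")    # (iv) veth pair
sh("ip link set qvb-vm1 master qbr-vm1")                 # bridge-side endpoint
sh("ovs-vsctl add-port br-int qvo-vm1")                  # OVS-side endpoint on br-int
for dev in ("tap-vm1", "qbr-vm1", "qvb-vm1", "qvo-vm1"):
    sh("ip link set %s up" % dev)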

4. Experimental Setup

The previous section makes the complexity of the OpenStack virtual network infrastructure clear. To understand optimal design strategies in terms of network performance, it is of great importance to analyze it under critical traffic conditions and assess the maximum sustainable packet rate under different application scenarios. The goal is to isolate as much as possible the level of performance of the main OpenStack network components and determine where the bottlenecks are located, speculating on possible improvements. To this purpose, a test-bed including a controller node, one or two compute nodes (depending on the specific experiment), and a network node was deployed and used to obtain the results presented in the following.

Figure 2: Network elements in an OpenStack network node connected to three virtual subnets. Three OVS bridges (red boxes) are interconnected by patch port pairs (orange boxes): br-ex is directly attached to the external network physical interface (eth0), whereas a GRE tunnel is established on the instance/tunnel network physical interface (eth1) to connect br-tun with its counterpart in the compute node. A number of br-int ports (light-green boxes) are connected to four virtual router interfaces and three DHCP servers. An additional physical interface (eth2) connects the network node to the management network.

In the test-bed, each compute node runs KVM, the native Linux VM Hypervisor, and is equipped with 8 GB of RAM and a quad-core processor enabled to hyperthreading, resulting in 8 virtual CPUs.

The test-bed was configured to implement three possible use cases:

(1) a typical single tenant cloud computing scenario;
(2) a multitenant NFV scenario with dedicated network functions;
(3) a multitenant NFV scenario with shared network functions.

For each use case, multiple experiments were executed, as reported in the following. In the various experiments, typically, a traffic source sends packets at increasing rate to a destination that measures the received packet rate and throughput. To this purpose, the RUDE & CRUDE tool was used for both traffic generation and measurement [26].

In some cases, the Iperf3 tool was also added to generate background traffic at a fixed data rate [27]. All physical interfaces involved in the experiments were Gigabit Ethernet network cards.
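For reference, the following self-contained sketch (our illustration of the measurement principle, not the actual RUDE & CRUDE code) paces fixed-size UDP datagrams at a target rate on the source side and counts what the sink actually receives; the coarse time.sleep() pacing only sustains modest rates, whereas the real tools use finer-grained timing:

import socket, time

def send(dst_addr, rate_pps, payload_bytes=1472, duration_s=10):
    # A 1472-byte UDP payload plus 28 bytes of UDP/IP headers gives
    # the 1500-byte IP packets used in most of the experiments.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"\x00" * payload_bytes
    period = 1.0 / rate_pps
    end = time.time() + duration_s
    while time.time() < end:
        sock.sendto(payload, dst_addr)
        time.sleep(period)        # coarse pacing: adequate at low rates only

def sink(port, duration_s=12):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    sock.settimeout(1.0)
    received, end = 0, time.time() + duration_s
    while time.time() < end:
        try:
            sock.recv(2048)
            received += 1
        except socket.timeout:
            pass
    print("average received rate: %.0f pps" % (received / duration_s))

# Example: run sink(5001) on the destination, then
# send(("10.0.1.20", 5001), rate_pps=1000) on the source.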

4.1. Single Tenant Cloud Computing Scenario. This is the typical configuration where a single tenant runs one or multiple VMs that exchange traffic with one another in the cloud or with an external host, as shown in Figure 4. This is a rather trivial case of limited general interest, but it is useful to assess some basic concepts and pave the way to the deeper analysis developed in the second part of this section. In the experiments reported here, as mentioned above, the virtualization Hypervisor was always KVM.


Figure 3: Network elements in an OpenStack compute node running two VMs. Two Linux Bridges (blue boxes) are attached to the VM tap interfaces (green boxes) and connected by virtual Ethernet pairs (light-blue boxes) to br-int.

A scenario with OpenStack running the cloud environment and a scenario without OpenStack were considered, to allow a general comparison and a first isolation of the performance degradation due to the individual building blocks, in particular Linux Bridge and OVS. The experiments report the following cases:

(1) OpenStack scenario: it adopts the standard OpenStack cloud platform, as described in the previous section, with two VMs respectively acting as sender and receiver. In particular, the following setups were tested:

(1.1) a single compute node executing two colocated VMs;
(1.2) two distinct compute nodes, each executing a VM.

(2) Non-OpenStack scenario: it adopts physical hosts running Linux-Ubuntu server and KVM Hypervisor, using either OVS or Linux Bridge as a virtual switch. The following setups were tested:

(2.1) one physical host executing two colocated VMs, acting as sender and receiver, directly connected to the same Linux Bridge;
(2.2) the same setup as the previous one, but with an OVS bridge instead of a Linux Bridge;
(2.3) two physical hosts: one executing the sender VM connected to an internal OVS, and the other natively acting as the receiver.


Figure 4: Reference logical architecture of a single tenant virtual infrastructure with 5 hosts: 4 hosts are implemented as VMs in the cloud and are interconnected via the OpenStack layer 2 virtual infrastructure; the 5th host is implemented by a physical machine placed outside the cloud but still connected to the same logical LAN.

Figure 5: Multitenant NFV scenario with dedicated network functions tested on the OpenStack platform.

4.2. Multitenant NFV Scenario with Dedicated Network Functions. The multitenant scenario we want to analyze is inspired by a simple NFV case study, as illustrated in Figure 5: each tenant's service chain consists of a customer-controlled VM followed by a dedicated deep packet inspection (DPI) virtual appliance and a conventional gateway (router) connecting the customer LAN to the public Internet. The DPI is deployed by the service operator as a separate VM with two network interfaces, running a traffic monitoring application based on the nDPI library [28]. It is assumed that the DPI analyzes the traffic profile of the customers (source and destination IP addresses and ports, application protocol, etc.) to guarantee the matching with the customer service level agreement (SLA), a practice that is rather common among Internet service providers to enforce network security and traffic policing. The virtualization approach, executing the DPI in a VM, makes it possible to easily configure and adapt the inspection function to the specific tenant characteristics. For this reason, every tenant has its own DPI with a dedicated configuration. On the other hand, the gateway has to implement a standard functionality and is shared among customers. It is implemented as a virtual router for packet forwarding and NAT operations.

The implementation of the test scenarios has been done following the OpenStack architecture. The compute nodes of the cluster run the VMs, while the network node runs the virtual router within a dedicated network namespace. All layer 2 connections are implemented by a virtual switch (with proper VLAN isolation) distributed in both the compute and network nodes. Figure 6 shows the view provided by the OpenStack dashboard, in the case of 4 tenants simultaneously active, which is the one considered for the numerical results presented in the following. The choice of 4 tenants was made to provide meaningful results with an acceptable degree of complexity, without lack of generality. As results show, this is enough to put the hardware resources of the compute node under stress and therefore evaluate performance limits and critical issues.

It is very important to outline that the VM setup shown in Figure 5 is not commonly seen in a traditional cloud computing environment. The VMs usually behave as single hosts connected as endpoints to one or more virtual networks, with one single network interface and no pass-through forwarding duties. In NFV, the virtual network functions (VNFs) often perform actions that require packet forwarding. Network Address Translators (NATs), Deep Packet Inspectors (DPIs), and so forth all belong to this category. If such VNFs are hosted in VMs, the result is that VMs in the OpenStack infrastructure must be allowed to perform packet forwarding, which goes against the typical rules implemented for security reasons in OpenStack. For instance, when a new VM is instantiated, it is attached to a Linux Bridge to which filtering rules are applied, with the goal of avoiding that the VM sends packets with MAC and IP addresses that are not the ones allocated to the VM itself. Clearly, this is an antispoofing rule that makes perfect sense in a normal networking environment, but it impairs the forwarding of packets originated by another VM, as is the case of the NFV scenario. In the scenario considered here, it was therefore necessary to permanently modify the filtering rules in the Linux Bridges, by allowing, within each tenant slice, packets coming from or directed to the customer VM's IP address to pass through the Linux Bridges attached to the DPI virtual appliance. Similarly, the virtual router is usually connected just to one LAN, so its NAT function is configured for a single pool of addresses. This was also modified and adapted to serve the whole set of internal networks used in the multitenant setup.
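A hedged sketch of the kind of relaxation applied is shown below; the chain and the address are placeholders (Neutron actually installs per-port chains derived from the port UUID), and the rules simply let packets sourced by or destined to the customer VM (10.0.3.5 here) traverse the bridged interfaces ahead of the antispoofing filters that would otherwise drop them:

import subprocess

def sh(cmd):
    subprocess.run(cmd, shell=True, check=True)

# Accept bridged (layer 2 forwarded) packets from/to the customer VM
# address, inserted before the antispoofing rules.
sh("iptables -I FORWARD -m physdev --physdev-is-bridged "
   "-s 10.0.3.5 -j ACCEPT")
sh("iptables -I FORWARD -m physdev --physdev-is-bridged "
   "-d 10.0.3.5 -j ACCEPT")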

4.3. Multitenant NFV Scenario with Shared Network Functions. We finally extend our analysis to a set of multitenant scenarios, assuming different levels of shared VNFs, as illustrated in Figure 7. We start with a single VNF, that is, the virtual router connecting all tenants to the external network (Figure 7(a)). Then we progressively add a shared DPI (Figure 7(b)), a shared firewall/NAT function (Figure 7(c)), and a shared traffic shaper (Figure 7(d)). The rationale behind this last group of setups is to evaluate how NFV deployment on top of an OpenStack compute node performs under a realistic multitenant scenario, where traffic flows must be processed by a chain of multiple VNFs.


Figure 6: The OpenStack dashboard shows the tenants' virtual networks (slices). Each slice includes a VM connected to an internal network (InVMnet_i) and a second VM performing DPI and packet forwarding between InVMnet_i and DPInet_i. Connectivity with the public Internet is provided for all by the virtual router in the bottom-left corner.

The complexity of the virtual network path inside the compute node for the VNF chaining of Figure 7(d) is displayed in Figure 8. The peculiar nature of NFV traffic flows is clearly shown in the figure, where packets are being forwarded multiple times across br-int as they enter and exit the multiple VNFs running in the compute node.

5. Numerical Results

5.1. Benchmark Performance. Before presenting and discussing the performance of the study scenarios described above, it is important to set some benchmark as a reference for comparison. This was done by considering a back-to-back (B2B) connection between two physical hosts with the same hardware configuration used in the cluster of the cloud platform.

The former host acts as traffic generator, while the latter acts as traffic sink. The aim is to verify and assess the maximum throughput and sustainable packet rate of the hardware platform used for the experiments. Packet flows ranging from 10^3 to 10^5 packets per second (pps), for both 64- and 1500-byte IP packet sizes, were generated.

For 1500-byte packets, the throughput saturates to about 970 Mbps at 80 Kpps. Given that the measurement does not consider the Ethernet overhead, this limit is clearly very close to 1 Gbps, which is the physical limit of the Ethernet interface. For 64-byte packets the results are different, since the maximum measured throughput is about 150 Mbps. Therefore, the limiting factor is not the Ethernet bandwidth but the maximum sustainable packet processing rate of the compute node. These results are shown in Figure 9.


Figure 7: Multitenant NFV scenario with shared network functions tested on the OpenStack platform: (a) single VNF; (b) two VNFs' chaining; (c) three VNFs' chaining; (d) four VNFs' chaining.

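The saturation points reported above can be cross-checked with simple line-rate arithmetic, as in the following sketch (the framing constants are standard Ethernet values, not taken from the paper):

# Back-of-the-envelope check of the saturation points reported above.
# On the wire, each Ethernet frame adds 38 bytes to the IP packet:
# 14 (header) + 4 (FCS) + 8 (preamble) + 12 (inter-frame gap).
LINE_RATE_BPS = 1e9   # Gigabit Ethernet

for ip_bytes in (64, 1500):
    wire_bits = (ip_bytes + 38) * 8
    max_pps = LINE_RATE_BPS / wire_bits
    ip_mbps = max_pps * ip_bytes * 8 / 1e6
    print("%4d-byte IP packets: %8.1f Kpps max, %7.1f Mbps at IP level"
          % (ip_bytes, max_pps / 1e3, ip_mbps))

# 1500-byte packets: about 81.3 Kpps and 975 Mbps, consistent with the
# measured saturation (~80 Kpps, ~970 Mbps); for 64-byte packets the
# line-rate ceiling (~1.2 Mpps) is far above what the host sustains,
# confirming that the CPU, not the link, is the bottleneck there.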

This latter limitation, related to the processing capabilities of the hosts, is not very relevant to the scope of this work. Indeed, it is always possible in a real operational environment to deploy more powerful and better dimensioned hardware. This was not possible in this set of experiments, where the cloud cluster was an existing research infrastructure which could not be modified at will. Nonetheless, the objective here is to understand the limitations that emerge as a consequence of the networking architecture resulting from the deployment of the VNFs in the cloud, and not of the specific hardware configuration. For these reasons, as well as for the sake of brevity, the numerical results presented in the following mostly focus on the case of 1500-byte packet length, which stresses the network more than the hosts in terms of performance.

5.2. Single Tenant Cloud Computing Scenario. The first series of results is related to the single tenant scenario described in Section 4.1. Figure 10 shows the comparison of OpenStack setups (1.1) and (1.2) with the B2B case. The figure shows that the different networking configurations play a crucial role in performance. Setup (1.1), with the two VMs colocated in the same compute node, clearly is more demanding, since the compute node has to process the workload of all the components shown in Figure 3, that is, packet generation and reception in two VMs and layer 2 switching in two Linux Bridges and two OVS bridges (as a matter of fact, the packets are both outgoing and incoming at the same time within the same physical machine). The performance starts deviating from the B2B case at around 20 Kpps, with a saturating effect starting at 30 Kpps. This is the maximum packet processing capability of the compute node, regardless of the physical networking capacity, which is not fully exploited in this particular scenario where the traffic flow does not leave the physical host. Setup (1.2) splits the workload over two physical machines, and the benefit is evident. The performance is almost ideal, with a very little penalty due to the virtualization overhead.

These very simple experiments lead to an important conclusion that motivates the more complex experiments that follow: the standard OpenStack virtual network implementation can show significant performance limitations. For this reason, the first objective was to investigate where the possible bottleneck is, by evaluating the performance of the virtual network components in isolation. This cannot be done with OpenStack in action; therefore, ad hoc virtual networking scenarios were implemented, deploying just parts of the typical OpenStack infrastructure. These are called Non-OpenStack scenarios in the following.


Figure 8: A view of the OpenStack compute node with the tenant VM and the VNFs installed, including the building blocks of the virtual network infrastructure. The red dashed line shows the path followed by the packets traversing the VNF chain displayed in Figure 7(d).

Figure 9: Throughput versus generated packet rate in the B2B setup for 64- and 1500-byte packets. Comparison with the ideal 1500-byte packet throughput.


Figure 10: Received versus generated packet rate in the OpenStack scenario, setups (1.1) and (1.2), with 1500-byte packets.


Setups (2.1) and (2.2) compare Linux Bridge, OVS, and B2B, as shown in Figure 11. The graphs show interesting and important results that can be summarized as follows:

Figure 11: Received versus generated packet rate in the Non-OpenStack scenario, setups (2.1) and (2.2), with 1500-byte packets.

(i) The introduction of some virtual network component (thus introducing the processing load of the physical hosts in the equation) is always a cause of performance degradation, but with very different degrees of magnitude depending on the virtual network component.
(ii) OVS introduces a rather limited performance degradation at very high packet rate, with a loss of some percent.
(iii) Linux Bridge introduces a significant performance degradation, starting well before the OVS case and leading to a loss in throughput as high as 50%.

The conclusion of these experiments is that the presence of additional Linux Bridges in the compute nodes is one of the main reasons for the OpenStack performance degradation. Results obtained from testing setup (2.3) are displayed in Figure 12, confirming that with OVS it is possible to reach performance comparable with the baseline.

5.3. Multitenant NFV Scenario with Dedicated Network Functions. The second series of experiments was performed with reference to the multitenant NFV scenario with dedicated network functions described in Section 4.2. The case study considers that different numbers of tenants are hosted in the same compute node, sending data to a destination outside the LAN, therefore beyond the virtual gateway. Figure 13 shows the packet rate actually received at the destination for each tenant, for different numbers of simultaneously active tenants, with 1500-byte IP packet size. In all cases the tenants generate the same amount of traffic, resulting in as many overlapping curves as the number of active tenants. All curves grow linearly as long as the generated traffic is sustainable, and then they saturate. The saturation is caused by the physical bandwidth limit imposed by the Gigabit Ethernet interfaces involved in the data transfer. In fact, the curves become flat as soon as the packet rate reaches about 80 Kpps for 1 tenant, about 40 Kpps for 2 tenants, about 27 Kpps for 3 tenants, and about 20 Kpps for 4 tenants, that is, when the total packet rate is slightly more than 80 Kpps, corresponding to 1 Gbps.

Figure 12: Received versus generated packet rate in the Non-OpenStack scenario, setup (2.3), with 1500-byte packets.

Figure 13: Received versus generated packet rate for each tenant (T1, T2, T3, and T4), for different numbers of active tenants, with 1500-byte IP packet size.


In this case it is worth investigating what happens for small packets, therefore putting more pressure on the processing capabilities of the compute node. Figure 14 reports the 64-byte packet size case. As discussed previously, in this case the performance saturation is not caused by the physical bandwidth limit, but by the inability of the hardware platform to cope with the packet processing workload (in fact, the single compute node has to process the workload of all the components involved, including packet generation and DPI in the VMs of each tenant, as well as layer 2 packet processing and switching in three Linux Bridges per tenant and two OVS bridges).

Figure 14: Received versus generated packet rate for each tenant (T1, T2, T3, and T4), for different numbers of active tenants, with 64-byte IP packet size.

As could be easily expected from the results presented in Figure 9, the virtual network is not able to use the whole physical capacity. Even in the case of just one tenant, a total bit rate of about 77 Mbps, well below 1 Gbps, is measured. Moreover, this penalty increases with the number of tenants (i.e., with the complexity of the virtual system). With two tenants the curve saturates at a total of approximately 150 Kpps (75 × 2), with three tenants at a total of approximately 135 Kpps (45 × 3), and with four tenants at a total of approximately 120 Kpps (30 × 4). This is to say that an increase of one unit in the number of tenants results in a decrease of about 10% in the usable overall network capacity and in a similar penalty per tenant.

Given the results of the previous section, it is likely that the Linux Bridges are responsible for most of this performance degradation. In Figure 15, a comparison is presented between the total throughput obtained under normal OpenStack operations and the corresponding total throughput measured in a custom configuration where the Linux Bridges attached to each VM are bypassed. To implement the latter scenario, the OpenStack virtual network configuration running in the compute node was modified by connecting each VM's tap interface directly to the OVS integration bridge. The curves show that the presence of Linux Bridges in normal OpenStack mode is indeed causing performance degradation, especially when the workload is high (i.e., with 4 tenants). It is interesting to note also that the penalty related to the number of tenants is mitigated by the bypass, but not fully solved.
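The bypass can be sketched as follows (illustrative interface names; note that detaching the tap from its per-VM bridge also removes Neutron's antispoofing filtering for that port, which is acceptable only in a controlled test environment):

import subprocess

def sh(cmd):
    subprocess.run(cmd, shell=True, check=True)

sh("ip link set tap-vm1 nomaster")                       # detach the tap from qbr-vm1
sh("ovs-vsctl --may-exist add-port br-int tap-vm1")      # plug it straight into br-int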

5.4. Multitenant NFV Scenario with Shared Network Functions. The third series of experiments was performed with reference to the multitenant NFV scenario with shared network functions described in Section 4.3.

Figure 15: Total throughput measured versus total packet rate generated by 2 to 4 tenants, for 64-byte packet size. Comparison between normal OpenStack mode and Linux Bridge bypass with 3 and 4 tenants.

Figure 16: Received versus generated packet rate for one tenant (T1) when four tenants are active, with 1500-byte IP packet size and different levels of VNF chaining, as per Figure 7. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

In each experiment, four tenants are equally generating increasing amounts of traffic, ranging from 1 to 100 Kpps. Figures 16 and 17 show the packet rate actually received at the destination from tenant T1, as a function of the packet rate generated by T1, for different levels of VNF chaining, with 1500- and 64-byte IP packet size, respectively. The measurements demonstrate that, for the 1500-byte case, adding a single shared VNF (even one that executes heavy packet processing, such as the DPI) does not significantly impact the forwarding performance of the OpenStack compute node for a packet rate below 50 Kpps (note that the physical capacity is saturated by the flows simultaneously generated from four tenants at around 20 Kpps, similarly to what happens in the dedicated VNF case of Figure 13). Then the throughput slowly degrades. In contrast, when 64-byte packets are generated, even a single VNF can cause heavy performance losses above 25 Kpps, when the packet rate reaches the sustainability limit of the forwarding capacity of our compute node. Independently of the packet size, adding another VNF with heavy packet processing (the firewall/NAT is configured with 40000 matching rules) causes the performance to rapidly degrade. This is confirmed when a fourth VNF is added to the chain, although for the 1500-byte case the measured packet rate is the one that saturates the maximum bandwidth made available by the traffic shaper. Very similar performance, which we do not show here, was measured also for the other three tenants.

Figure 17: Received versus generated packet rate for one tenant (T1) when four tenants are active, with 64-byte IP packet size and different levels of VNF chaining, as per Figure 7. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.


To further investigate the effect of VNF chaining, we considered the case when traffic generated by tenant T1 is not subject to VNF chaining (as in Figure 7(a)), whereas flows originated from T2, T3, and T4 are processed by four VNFs (as in Figure 7(d)). The results presented in Figures 18 and 19 demonstrate that, owing to the traffic shaping function applied to the other tenants, the throughput of T1 can reach values not very far from the case when it is the only active tenant, especially for packet rates below 35 Kpps. Therefore, a smart choice of the VNF chaining and a careful planning of the cloud platform resources could improve the performance of a given class of priority customers. In the same situation, we measured the TCP throughput achievable by the four tenants. As shown in Figure 20, we can reach the same conclusions as in the UDP case.

Figure 18: Received throughput versus generated packet rate for each tenant (T1, T2, T3, and T4) when T1 does not traverse the VNF chain of Figure 7(d), with 1500-byte IP packet size. Comparison with the single tenant case. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

Figure 19: Received throughput versus generated packet rate for each tenant (T1, T2, T3, and T4) when T1 does not traverse the VNF chain of Figure 7(d), with 64-byte IP packet size. Comparison with the single tenant case. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

6. Conclusion

Network Function Virtualization will completely reshape the approach of telco operators to provide existing as well as novel network services, taking advantage of the increased flexibility and reduced deployment costs of the cloud computing paradigm. In this work, the problem of evaluating complexity and performance, in terms of sustainable packet rate, of virtual networking in cloud computing infrastructures dedicated to NFV deployment was addressed. An OpenStack-based cloud platform was considered and deeply analyzed to fully understand the architecture of its virtual network infrastructure. To this end, an ad hoc visual tool was also developed that graphically plots the different functional blocks (and related interconnections) put in place by Neutron, the OpenStack networking service. Some examples were provided in the paper.


Figure 20: Received TCP throughput for each tenant (T1, T2, T3, and T4) when T1 does not traverse the VNF chain of Figure 7(d). Comparison with the single tenant case. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.


The analysis brought the focus of the performance investigation on the two basic software switching elements natively adopted by OpenStack, namely, Linux Bridge and Open vSwitch. Their performance was first analyzed in a single tenant cloud computing scenario, by running experiments on a standard OpenStack setup, as well as in ad hoc stand-alone configurations built with the specific purpose of observing them in isolation. The results prove that the Linux Bridge is the critical bottleneck of the architecture, while Open vSwitch shows an almost optimal behavior.

The analysis was then extended to more complex scenarios, assuming a data center hosting multiple tenants deploying NFV environments. The case studies considered first a simple dedicated deep packet inspection function, followed by conventional address translation and routing, and then a more realistic virtual network function chaining shared among a set of customers, with increased levels of complexity. Results about sustainable packet rate and throughput performance of the virtual network infrastructure were presented and discussed.

The main outcome of this work is that an open-source cloud computing platform such as OpenStack can be effectively adopted to deploy NFV in network edge data centers replacing legacy telco central offices. However, this solution poses some limitations to the network performance, which are not simply related to the hosting hardware maximum capacity but also to the virtual network architecture implemented by OpenStack. Nevertheless, our study demonstrates that some of these limitations can be mitigated with a careful redesign of the virtual network infrastructure and an optimal planning of the virtual network functions. In any case, such limitations must be carefully taken into account for any engineering activity in the virtual networking arena.

Obviously, scaling up the system and distributing the virtual network functions among several compute nodes will definitely improve the overall performance. However, in this case the role of the physical network infrastructure becomes critical, and an accurate analysis is required in order to isolate the contributions of virtual and physical components. We plan to extend our study in this direction in our future work, after properly upgrading our experimental test-bed.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work was partially funded by EIT ICT Labs, Action Line on Future Networking Solutions, Activity no. 15270/2015 "SDN at the Edges". The authors would like to thank Mr. Giuliano Santandrea for his contributions to the experimental setup.


Research Article

Server Resource Dimensioning and Routing of Service Function Chain in NFV Network Architectures

V. Eramo,1 A. Tosti,2 and E. Miucci1

1DIET, "Sapienza" University of Rome, Via Eudossiana 18, 00184 Rome, Italy
2Telecom Italia, Via di Val Cannuta 250, 00166 Roma, Italy

Correspondence should be addressed to E. Miucci; 1kronos1@gmail.com

Received 29 September 2015; Accepted 18 February 2016

Academic Editor: Vinod Sharma

Copyright © 2016 V. Eramo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The Network Function Virtualization (NFV) technology aims at virtualizing the network service with the execution of the single service components in Virtual Machines activated on Commercial-off-the-shelf (COTS) servers. Any service is represented by a Service Function Chain (SFC), that is, a set of VNFs to be executed according to a given order. The running of VNFs needs the instantiation of VNF instances (VNFIs) that, in general, are software components executed on Virtual Machines. In this paper we cope with the routing and resource dimensioning problem in NFV architectures. We formulate the optimization problem and, due to its NP-hard complexity, heuristics are proposed for both cases of offline and online traffic demand. We show how the heuristics work correctly by guaranteeing a uniform occupancy of the server processing capacity and the network link bandwidth. A consolidation algorithm for the power consumption minimization is also proposed. The application of the consolidation algorithm allows for a high power consumption saving that, however, is to be paid with an increase in SFC blocking probability.

1 Introduction

Today's networks are overly complex, partly due to an increasing variety of proprietary, fixed-function appliances that are unable to deliver the agility and economics needed to address constantly changing market requirements [1]. This is because network elements have traditionally been optimized for high packet throughput at the expense of flexibility, thus hampering the deployment of new services [2]. Network Function Virtualization (NFV) can provide the infrastructure flexibility and agility needed to successfully compete in today's evolving communications landscape [3]. NFV implements network functions in software running on a pool of shared commodity servers instead of using dedicated proprietary hardware. This virtualized approach decouples the network hardware from the network function and results in increased infrastructure flexibility and reduced hardware costs. Because the infrastructure is simplified and streamlined, new and expanded services can be created quickly and with less expense. Implementation of the paradigm has also been proposed [4] and the performance has been investigated [5]. To support the NFV technology, both ETSI [6, 7] and

IETF [8, 9] are defining novel network architectures able to allocate resources for Virtualized Network Functions (VNFs) as well as manage and orchestrate NFV to support services. In particular, the service is represented by a Service Function Chain (SFC) [8], that is, a set of VNFs that have to be executed according to a given order. Any VNF is run on a VNF instance (VNFI) implemented with one Virtual Machine (VM), whose resources (Vcores, RAM memory, etc.) are allocated to it [10]. Some solutions have been proposed in the literature to solve the problem of choosing the servers where to instantiate the VNFs and to determine the network paths interconnecting the VNFs [1]. A formulation of the optimization problem is illustrated in [11]. Three greedy algorithms and a tabu search-based heuristic are proposed in [12]. The extension of the problem to the case in which virtual routers are also considered is proposed in [13].

The innovative contribution of our paper is that, differently from [12], we follow an approach that allows for both the resource dimensioning and the SFC routing. We formulate the optimization problem whose objective is the minimization of the number of dropped SFC requests, with the constraint that the SFCs are routed so as to respect both



the server processing capacity and network link bandwidth. With the problem being NP-hard, we introduce a heuristic that allows for (i) the dimensioning of the Virtual Machines in terms of number of Vcores assigned and (ii) the SFC routing through the VNF instances implemented by the Virtual Machines. The heuristic performs the dimensioning and routing operations simultaneously. Furthermore, we propose a consolidation algorithm based on Virtual Machine migrations and able to achieve power consumption savings. The proposed algorithms are evaluated in scenarios characterized by offline and online traffic demands.

The paper is organized as follows. The related work is discussed in Section 2. Section 3 is devoted to illustrating the optimization problem and the proposed heuristic for the offline traffic demand case. In Section 4 we describe an SFC planner in which the proposed heuristic is applied in the case of online traffic demand. The planner also implements a consolidation algorithm whose application allows for power consumption savings. Some numerical results are shown in Section 5 to prove the effectiveness of the proposed heuristics. Finally, the main conclusions and future research items are mentioned in Section 6.

2 Related Work

The Internet Engineering Task Force (IETF) has formed the SFC Working Group [8, 9] to define Service Function Chaining related problems and to standardize the architecture and protocols. A Service Function Chain (SFC) is defined as a set of abstract service functions [15] and ordering constraints that must be applied to packets selected as a result of the classification. When virtual service functions are considered, the SFC is referred to as a Virtual Network Function Forwarding Graph (VNFFG) within the ETSI [6]. To support the SFCs, Virtual Network Function instances (VNFIs) are activated and executed on COTS servers. To achieve the economics of scale expected from NFV, network link and server resources should be used efficiently. For this reason, efficient algorithms have to be introduced to determine where to instantiate the VNFIs and to route the SFCs by choosing the network paths and the VNFIs involved. The algorithms have to take into account the limited resources of the network links and the servers and pursue objectives of load balancing, energy saving, recovery from failure, and so on [1]. The task of placing SFCs is closely related to virtual network embedding [16] and virtual data network embedding [17] and may therefore be formulated as an optimization problem with a particular objective. This approach has been followed by [11, 18–21]. For instance, Moens and Turck [11] formulate the SFC placement problem as a linear optimization problem that has the objective of minimizing the number of active servers. Other objectives are pursued in [19] (latency, remaining data rate, number of used network nodes, etc.), and the SFC placing is formulated as a mixed integer quadratically constrained program.

It has been proved that the SFC placing problem is NP-hard. For this reason, efficient heuristics have been proposed and evaluated [10, 13, 22, 23]. For example, Xia et al. [22] formulate the placement and chaining problem as binary integer programming and propose a greedy heuristic in which the SFCs are first sorted according to their resource demands and the SFCs with the highest resource demands are given priority for placement and routing.

In order to achieve energy consumption saving, the NFV architecture should allow for migrations of VNFIs, that is, the migration of the Virtual Machines implementing the VNFIs. Though VNFI migrations allow for energy consumption saving, they may impact the QoS performance received by the users related to the migrated VNFs. A model has been proposed in [24] to derive some performance indicators, such as the whole service down time and the total migration time, so as to make function migration decisions.

The contribution of this paper is twofold: (i) to propose and to evaluate the performance of an algorithm that performs simultaneously resource dimensioning and SFC routing and (ii) to investigate the advantages, from the point of view of the power consumption saving, that the application of server consolidation techniques allows us to achieve.

3 Offline Algorithms for SFC Routing in NFV Network Architectures

We consider the case in which the SFC requests are known in advance. We formulate the optimization problem whose objective is the minimization of the number of dropped SFC requests, with the constraint that the SFCs are routed so as to respect both the server processing capacity and the network link bandwidth. With the problem being NP-hard, we introduce a heuristic that allows for (i) the dimensioning of the Virtual Machines in terms of the number of Vcores assigned and (ii) the SFC routing through the VNF instances implemented by the Virtual Machines. The heuristic performs the dimensioning and routing operations simultaneously.

The section is organized as follows. The network and traffic model is introduced in Section 3.1. Sections 3.2 and 3.3 are devoted to illustrating the optimization problem and the proposed heuristic, respectively.

3.1 Network and Traffic Model. Next we introduce the main terminology used to represent the physical network, the VNFs, and the SFC traffic requests [25]. We represent the physical network PN as a directed graph $\mathcal{G}^{\text{PN}} = (\mathcal{V}^{\text{PN}}, \mathcal{E}^{\text{PN}})$, where $\mathcal{V}^{\text{PN}}$ and $\mathcal{E}^{\text{PN}}$ are the sets of physical nodes and links, respectively. The set $\mathcal{V}^{\text{PN}}$ of nodes is given by the union of the three node sets $\mathcal{V}^{\text{PN}}_{A}$, $\mathcal{V}^{\text{PN}}_{R}$, and $\mathcal{V}^{\text{PN}}_{S}$, which are the sets of access, switching, and server nodes, respectively. The server nodes and links are characterized by the following:

(i) $N^{\text{PN}}_{\text{core}}(w)$: processing capacity of the server node $w \in \mathcal{V}^{\text{PN}}_{S}$, in terms of the number of cores available;

(ii) $C^{\text{PN}}(d)$: bandwidth of the physical link $d \in \mathcal{E}^{\text{PN}}$.

We assume that $F$ types of VNFs can be provisioned, such as firewall, IDS, proxy, load balancer, and so on. We denote by $\mathcal{F} = \{f_1, f_2, \ldots, f_F\}$ the set of VNFs, with $f_i$ being the $i$th VNF type. The packet processing time of the VNF $f_i$ is denoted by $t^{\text{proc}}_{i}$ ($i = 1, \ldots, F$).

We also assume that the network operator is receiving $T$ Service Function Chain (SFC) requests, known in advance. The $i$th SFC request is characterized by the graph $\mathcal{G}^{\text{SFC}}_{i} = (\mathcal{V}^{\text{SFC}}_{i}, \mathcal{E}^{\text{SFC}}_{i})$, where $\mathcal{V}^{\text{SFC}}_{i}$ represents the set of access and VNF nodes and $\mathcal{E}^{\text{SFC}}_{i}$ ($i = 1, \ldots, T$) denotes the links between them. In particular, the set $\mathcal{V}^{\text{SFC}}_{i}$ is given by the union of $\mathcal{V}^{\text{SFC}}_{i,A}$ and $\mathcal{V}^{\text{SFC}}_{i,F}$, denoting the sets of access nodes and VNFs, respectively. The graph is characterized by the following parameters:

(i) $\alpha_{vw}$: assuming the value 1 if the access node $v \in \bigcup_{i=1}^{T} \mathcal{V}^{\text{SFC}}_{i,A}$ characterizes an SFC request starting/terminating from/to the physical access node $w \in \mathcal{V}^{\text{PN}}_{A}$; otherwise its value is 0;

(ii) $\beta_{vk}$: assuming the value 1 if the VNF node $v \in \bigcup_{i=1}^{T} \mathcal{V}^{\text{SFC}}_{i,F}$ needs the application of the VNF $f_k$ ($k \in [1, F]$); otherwise its value is 0;

(iii) $B^{\text{SFC}}(v)$: the processing capacity requested by the VNF node $v \in \bigcup_{i=1}^{T} \mathcal{V}^{\text{SFC}}_{i,F}$; the parameter value depends on both the packet length and the bandwidth of the packet flow incoming to the VNF node;

(iv) $C^{\text{SFC}}(e)$: bandwidth requested by the link $e \in \bigcup_{i=1}^{T} \mathcal{E}^{\text{SFC}}_{i}$.

An example of SFC request is represented in Figure 1, where the traffic emitted by the ingress access node $u$ has to be handled by the VNF nodes $v_1$ and $v_2$ and is terminated at the egress access node $t$. The access and VNF nodes are interconnected by the links $e_1$, $e_2$, and $e_3$. If the flow bandwidth is 1 Mbps and the packet length is 1500 bytes, we obtain the processing capacities $B^{\text{SFC}}(v_1) = B^{\text{SFC}}(v_2) = 83.33$, while the bandwidths $C^{\text{SFC}}(e_1)$, $C^{\text{SFC}}(e_2)$, and $C^{\text{SFC}}(e_3)$ of all links equal 1.

[Figure 1: An example of graph representing an SFC request for a flow characterized by a bandwidth of 1 Mbps and a packet length of 1500 bytes; $B^{\text{SFC}}(v_1) = B^{\text{SFC}}(v_2) = 83.33$ and $C^{\text{SFC}}(e_1) = C^{\text{SFC}}(e_2) = C^{\text{SFC}}(e_3) = 1$.]
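To make the example figures explicit, the processing capacity requested by each VNF node is simply the packet rate of the incoming flow, obtained by dividing the flow bandwidth by the packet size:

$$B^{\text{SFC}}(v_1) = B^{\text{SFC}}(v_2) = \frac{10^{6}\ \text{bit/s}}{1500 \times 8\ \text{bit/packet}} \approx 83.33\ \text{packets/s}.$$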

3.2 Optimization Problem for the SFC Routing and VNF Instance Dimensioning. The objective of the SFC Routing and VNF Instance Dimensioning (SRVID) problem is to maximize the number of accepted SFC requests. The output of the problem is characterized by the following: (i) the servers in which the VNF nodes are executed and (ii) the network paths in which the virtual links of the SFCs are routed. This arrangement has to be accomplished without violating both the server processing and the physical link capacities. We assume that all of the servers create a VNF instance for each type of VNF, which will be shared by the SFCs using that server and requesting that type of VNF. For this reason each server will activate $F$ VNF instances, one for each type, and another output of the problem is to determine the number of Vcores to be assigned to each VNF instance.

We assume that one virtual link of the SFC graphs can be routed through a single physical network path. We introduce the following notations:

(i) $\mathcal{P}$: set of paths in $\mathcal{G}^{\text{PN}}$;

(ii) $\delta_{ep}$: the binary function assuming the value 1 or 0 if the physical network link $e$ belongs or does not belong to the path $p \in \mathcal{P}$, respectively;

(iii) $a^{\text{PN}}(p)$ and $b^{\text{PN}}(p)$: origin and destination nodes of the path $p \in \mathcal{P}$;

(iv) $a^{\text{SFC}}(d)$ and $b^{\text{SFC}}(d)$: origin and destination nodes of the virtual link $d \in \bigcup_{i=1}^{T} \mathcal{E}^{\text{SFC}}_{i}$.

Next we formulate the optimal SRVID problem, characterized by the following optimization variables:

(i) $x_h$: binary variable assuming the value 1 if the $h$th SFC request is accepted; otherwise its value is zero;

(ii) $y_{wk}$: integer variable characterizing the number of Vcores allocated to the VNF instance of type $k$ in the server $w \in \mathcal{V}^{\text{PN}}_{S}$;

(iii) $z^{k}_{vw}$: binary variable assuming the value 1 if the VNF node $v \in \bigcup_{i=1}^{T} \mathcal{V}^{\text{SFC}}_{i,F}$ is served by the VNF instance of type $k$ in the server $w \in \mathcal{V}^{\text{PN}}_{S}$;

(iv) $u_{dp}$: binary variable assuming the value 1 if the virtual link $d \in \bigcup_{i=1}^{T} \mathcal{E}^{\text{SFC}}_{i}$ is embedded in the physical network path $p \in \mathcal{P}$; otherwise its value is zero.

Next we report the constraints for the optimization variables:

$$\sum_{k=1}^{F} y_{wk} \le N^{\text{PN}}_{\text{core}}(w), \quad w \in \mathcal{V}^{\text{PN}}_{S} \tag{1}$$

$$\sum_{k=1}^{F} \sum_{w \in \mathcal{V}^{\text{PN}}_{S}} z^{k}_{vw} \le 1, \quad v \in \bigcup_{i=1}^{T} \mathcal{V}^{\text{SFC}}_{i,F} \tag{2}$$

$$z^{k}_{vw} \le \beta_{vk}, \quad k \in [1, F],\ v \in \bigcup_{i=1}^{T} \mathcal{V}^{\text{SFC}}_{i,F},\ w \in \mathcal{V}^{\text{PN}}_{S} \tag{3}$$

$$\sum_{v \in \bigcup_{i=1}^{T} \mathcal{V}^{\text{SFC}}_{i,F}} z^{k}_{vw}\, B^{\text{SFC}}(v)\, t^{\text{proc}}_{k} \le y_{wk}, \quad k \in [1, F],\ w \in \mathcal{V}^{\text{PN}}_{S} \tag{4}$$

$$u_{dp} \le z^{k}_{a^{\text{SFC}}(d)\, a^{\text{PN}}(p)}, \quad a^{\text{SFC}}(d) \in \bigcup_{i=1}^{T} \mathcal{V}^{\text{SFC}}_{i,F},\ k \in [1, F],\ p \in \mathcal{P} \tag{5}$$

$$u_{dp} \le z^{k}_{b^{\text{SFC}}(d)\, b^{\text{PN}}(p)}, \quad b^{\text{SFC}}(d) \in \bigcup_{i=1}^{T} \mathcal{V}^{\text{SFC}}_{i,F},\ k \in [1, F],\ p \in \mathcal{P} \tag{6}$$

$$u_{dp} \le \alpha_{a^{\text{SFC}}(d)\, a^{\text{PN}}(p)}, \quad a^{\text{SFC}}(d) \in \bigcup_{i=1}^{T} \mathcal{V}^{\text{SFC}}_{i,A},\ p \in \mathcal{P} \tag{7}$$

$$u_{dp} \le \alpha_{b^{\text{SFC}}(d)\, b^{\text{PN}}(p)}, \quad b^{\text{SFC}}(d) \in \bigcup_{i=1}^{T} \mathcal{V}^{\text{SFC}}_{i,A},\ p \in \mathcal{P} \tag{8}$$

$$\sum_{p \in \mathcal{P}} u_{dp} \le 1, \quad d \in \bigcup_{i=1}^{T} \mathcal{E}^{\text{SFC}}_{i} \tag{9}$$

$$\sum_{d \in \bigcup_{i=1}^{T} \mathcal{E}^{\text{SFC}}_{i}} C^{\text{SFC}}(d) \sum_{p \in \mathcal{P}} \delta_{ep}\, u_{dp} \le C^{\text{PN}}(e), \quad e \in \mathcal{E}^{\text{PN}} \tag{10}$$

$$x_h \le \sum_{k=1}^{F} \sum_{w \in \mathcal{V}^{\text{PN}}_{S}} z^{k}_{vw}, \quad h \in [1, T],\ v \in \mathcal{V}^{\text{SFC}}_{h,F} \tag{11}$$

$$x_h \le \sum_{p \in \mathcal{P}} u_{dp}, \quad h \in [1, T],\ d \in \mathcal{E}^{\text{SFC}}_{h} \tag{12}$$

Constraint (1) establishes the fact that at most a number of Vcores equal to the available ones is used in each server node $w \in \mathcal{V}^{\text{PN}}_{S}$. Constraint (2) expresses the condition that a VNF node can be served by only one VNF instance. To guarantee that a node $v \in \mathcal{V}^{\text{SFC}}_{h,F}$ needing the application of a VNF type is mapped to a correct VNF instance, we introduce constraint (3). Constraint (4) limits the number of VNF nodes assigned to one VNF instance by taking into account both the number of Vcores assigned to the VNF instance and the required VNF node processing capacities. Constraints (5)–(8) establish the fact that, when the virtual link $d$ is supported by the physical network path $p$, then $a^{\text{PN}}(p)$ and $b^{\text{PN}}(p)$ must be the physical network nodes that the nodes $a^{\text{SFC}}(d)$ and $b^{\text{SFC}}(d)$ of the virtual graph are assigned to. The choice of mapping any virtual link on a single physical network path is represented by constraint (9). Constraint (10) avoids any overloading of any physical network link. Finally, constraints (11) and (12) establish the fact that an SFC request can be accepted when the nodes and the links of the SFC graph are assigned to VNF instances and physical network paths.

The problem objective is to maximize the number of SFCs accepted, given by

$$\max \sum_{h=1}^{T} x_h. \tag{13}$$

The introduced optimization problem is intractable because it requires solving an NP-hard bin packing problem [26]. Due to its high complexity, it is not possible to solve the problem directly in a timely manner given the large number of servers and network nodes. For this reason we propose an efficient heuristic in Section 3.3.
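To illustrate how the server-side constraints and the objective fit together, the following is a minimal sketch of a reduced SRVID instance written with the open-source PuLP modeling library (our choice, not the paper's). The instance data are invented; path/link constraints (5)–(10) and (12) are omitted for brevity, and constraint (3) is enforced implicitly by typing each $z$ variable with the VNF type its node requires.

```python
# Reduced SRVID ILP sketch (server side only); all data is illustrative.
import pulp

servers = ["s1", "s2"]                      # V^PN_S
n_core = {"s1": 48, "s2": 48}               # N^PN_core(w)
vnf_types = ["fw", "vpn", "lb"]             # the F VNF types
t_proc = {"fw": 7.08e-6, "vpn": 1.64e-6, "lb": 0.65e-6}   # seconds [27]

# SFC requests: VNF node -> (required type, B^SFC(v) in packets/s)
sfc = {"h1": {"v1": ("fw", 83.33), "v2": ("vpn", 83.33)},
       "h2": {"v3": ("fw", 166.66)}}
nodes = [(h, v) for h in sfc for v in sfc[h]]

prob = pulp.LpProblem("SRVID", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", list(sfc), cat="Binary")
y = pulp.LpVariable.dicts("y", [(w, k) for w in servers for k in vnf_types],
                          lowBound=0, cat="Integer")
z = pulp.LpVariable.dicts("z", [(h, v, w) for (h, v) in nodes
                                for w in servers], cat="Binary")

prob += pulp.lpSum(x[h] for h in sfc)                 # objective (13)
for w in servers:                                     # constraint (1)
    prob += pulp.lpSum(y[(w, k)] for k in vnf_types) <= n_core[w]
for (h, v) in nodes:                                  # constraints (2), (11)
    prob += pulp.lpSum(z[(h, v, w)] for w in servers) <= 1
    prob += x[h] <= pulp.lpSum(z[(h, v, w)] for w in servers)
for w in servers:                                     # constraint (4)
    for k in vnf_types:
        prob += pulp.lpSum(z[(h, v, w)] * sfc[h][v][1] * t_proc[k]
                           for (h, v) in nodes
                           if sfc[h][v][0] == k) <= y[(w, k)]

prob.solve()
print("accepted:", [h for h in sfc if pulp.value(x[h]) == 1])
```

Even in this reduced form, the bin-packing structure responsible for the NP-hardness is visible: VNF nodes must be packed into integer Vcore budgets on each server.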

3.3 Maximizing the Accepted SFC Requests Number (MASRN) Heuristic. The Maximizing the Accepted SFC Requests Number (MASRN) heuristic is based on the embedding algorithm proposed in [14] and has the objective of maximizing the number of accepted SFC requests. The MASRN algorithm tries routing the SFCs one by one by using the least loaded link and server resources, so as to balance their use. Algorithm 1 illustrates the operation mode of the MASRN algorithm. The routing of a target SFC, the $k_{\text{tar}}$th, is illustrated. The main inputs of the algorithm are (i) the set $T_{\text{pre}}$ containing the indexes of the SFC requests routed successfully before the $k_{\text{tar}}$th SFC request, (ii) the $k_{\text{tar}}$th SFC request graph $\mathcal{G}^{\text{SFC}}_{k_{\text{tar}}} = (\mathcal{V}^{\text{SFC}}_{k_{\text{tar}}}, \mathcal{E}^{\text{SFC}}_{k_{\text{tar}}})$, (iii) the values of the parameters $\alpha_{vw}$ for the $k_{\text{tar}}$th SFC request, and (iv) the values of the variables $z^{k}_{vw}$ and $u_{dp}$, defined in Section 3.2, for the SFC requests successfully routed before the $k_{\text{tar}}$th SFC request.

The operation mode of the heuristic is based on the following variables:

(i) $A^{k}(w)$: the load of the cores allocated for the VNF instance of type $k \in [1, F]$ in the server $w \in \mathcal{V}^{\text{PN}}_{S}$; the value of $A^{k}(w)$ is initialized by taking into account the core resource amount used by the SFCs successfully routed before the $k_{\text{tar}}$th SFC request; hence its initial value is

$$A^{k}(w) = \sum_{v \in \bigcup_{i \in T_{\text{pre}}} \mathcal{V}^{\text{SFC}}_{i,F}} z^{k}_{vw}\, B^{\text{SFC}}(v)\, t^{\text{proc}}_{k}; \tag{14}$$

(ii) $S_{N}(w)$: defined as the stress of the node $w \in \mathcal{V}^{\text{PN}}_{S}$ [14] and characterizing the server load; its value is initialized to

$$S_{N}(w) = \sum_{k=1}^{F} A^{k}(w); \tag{15}$$

(iii) $S_{L}(e)$: defined as the stress of the physical network link $e \in \mathcal{E}^{\text{PN}}$ [14] and characterizing the link load; its value is initialized to

$$S_{L}(e) = \sum_{d \in \bigcup_{i \in T_{\text{pre}}} \mathcal{E}^{\text{SFC}}_{i}} C^{\text{SFC}}(d) \sum_{p \in \mathcal{P}} \delta_{ep}\, u_{dp}. \tag{16}$$

(1) Input: Physical network graph $\mathcal{G}^{\text{PN}} = (\mathcal{V}^{\text{PN}}, \mathcal{E}^{\text{PN}})$; the $k_{\text{tar}}$th SFC request and its graph $\mathcal{G}^{\text{SFC}}_{k_{\text{tar}}} = (\mathcal{V}^{\text{SFC}}_{k_{\text{tar}}}, \mathcal{E}^{\text{SFC}}_{k_{\text{tar}}})$; $T_{\text{pre}}$; $\alpha_{vw}$, $v \in \mathcal{V}^{\text{SFC}}_{k_{\text{tar}},A}$, $w \in \mathcal{V}^{\text{PN}}_{A}$; $z^{k}_{vw}$, $k \in [1, F]$, $v \in \bigcup_{i \in T_{\text{pre}}} \mathcal{V}^{\text{SFC}}_{i,F}$, $w \in \mathcal{V}^{\text{PN}}_{S}$; $u_{dp}$, $d \in \bigcup_{i \in T_{\text{pre}}} \mathcal{E}^{\text{SFC}}_{i}$, $p \in \mathcal{P}$
(2) Variables: $A^{k}(w)$, $k \in [1, F]$, $w \in \mathcal{V}^{\text{PN}}_{S}$; $S_{N}(w)$, $w \in \mathcal{V}^{\text{PN}}_{S}$; $S_{L}(e)$, $e \in \mathcal{E}^{\text{PN}}$; $y_{wk}$, $k \in [1, F]$, $w \in \mathcal{V}^{\text{PN}}_{S}$
/* Evaluation of the candidate mapping of $\mathcal{G}^{\text{SFC}}_{k_{\text{tar}}}$ in $\mathcal{G}^{\text{PN}}$ */
(3) assign each node $v \in \mathcal{V}^{\text{SFC}}_{k_{\text{tar}},A}$ to the physical network node $w \in \mathcal{V}^{\text{PN}}_{A}$ according to the value of the parameter $\alpha_{vw}$
(4) select the nodes $w \in \mathcal{V}^{\text{PN}}_{S}$ and the physical network paths $p \in \mathcal{P}$ to be assigned to the VNF nodes $v \in \mathcal{V}^{\text{SFC}}_{k_{\text{tar}},F}$ and virtual links $d \in \mathcal{E}^{\text{SFC}}_{k_{\text{tar}}}$ by applying the algorithm proposed in [14], based on the values of the node and link stresses $S_{N}(w)$ and $S_{L}(e)$; determine the variable values $z^{k}_{vw}$, $k \in [1, F]$, $v \in \mathcal{V}^{\text{SFC}}_{k_{\text{tar}},F}$, $w \in \mathcal{V}^{\text{PN}}_{S}$, and $u_{dp}$, $d \in \mathcal{E}^{\text{SFC}}_{k_{\text{tar}}}$, $p \in \mathcal{P}$
/* Server Resource Availability Check Phase */
(5) for $v \in \mathcal{V}^{\text{SFC}}_{k_{\text{tar}},F}$, $w \in \mathcal{V}^{\text{PN}}_{S}$, $k \in [1, F]$ do
(6)  if $\sum_{s=1, s \neq k}^{F} y_{ws} + \lceil A^{k}(w) + z^{k}_{vw} B^{\text{SFC}}(v) t^{\text{proc}}_{k} \rceil \le N^{\text{PN}}_{\text{core}}(w)$ then
(7)   $A^{k}(w) = A^{k}(w) + z^{k}_{vw} B^{\text{SFC}}(v) t^{\text{proc}}_{k}$
(8)   $S_{N}(w) = S_{N}(w) + z^{k}_{vw} B^{\text{SFC}}(v) t^{\text{proc}}_{k}$
(9)   $y_{wk} = \lceil A^{k}(w) \rceil$
(10)  else
(11)   REJECT the $k_{\text{tar}}$th SFC request
(12)  end if
(13) end for
/* Link Resource Availability Check Phase */
(14) for $e \in \mathcal{E}^{\text{PN}}$ do
(15)  if $S_{L}(e) + \sum_{d \in \mathcal{E}^{\text{SFC}}_{k_{\text{tar}}}} C^{\text{SFC}}(d) \sum_{p \in \mathcal{P}} \delta_{ep} u_{dp} \le C^{\text{PN}}(e)$ then
(16)   $S_{L}(e) = S_{L}(e) + \sum_{d \in \mathcal{E}^{\text{SFC}}_{k_{\text{tar}}}} C^{\text{SFC}}(d) \sum_{p \in \mathcal{P}} \delta_{ep} u_{dp}$
(17)  else
(18)   REJECT the $k_{\text{tar}}$th SFC request
(19)  end if
(20) end for
(21) Output: $z^{k}_{vw}$, $k \in [1, F]$, $v \in \mathcal{V}^{\text{SFC}}_{k_{\text{tar}},F}$, $w \in \mathcal{V}^{\text{PN}}_{S}$; $u_{dp}$, $d \in \mathcal{E}^{\text{SFC}}_{k_{\text{tar}}}$, $p \in \mathcal{P}$

Algorithm 1: MASRN algorithm.

The candidate physical network links and nodes in which to embed the target SFC are evaluated. First of all, the access nodes are evaluated (line 3) according to the values of the parameters $\alpha_{vw}$, $v \in \mathcal{V}^{\text{SFC}}_{k_{\text{tar}},A}$, $w \in \mathcal{V}^{\text{PN}}_{A}$. Next, the MASRN algorithm selects a cluster of server nodes (line 4) that are not lightly loaded but are also likely to result in low substrate link stresses when they are connected. Details on the procedure are reported in [14].

In the next phases the resource availability in the selected cluster is checked. The resource availability in the server nodes and physical network links is verified in the Server (lines 5–13) and Link (lines 14–20) Resource Availability Check Phases, respectively. If resources are not available in either links or nodes, the target SFC request is rejected. In particular, in the Server Resource Availability Check Phase the algorithm verifies whether the allocation of new Vcores is needed and, in this case, their availability is checked (line 6). The variable $y_{wk}$ is also updated in this phase (line 9).

Finally, the outputs (line 21) of the MASRN algorithm are the selected values for the variables $z^{k}_{vw}$, $k \in [1, F]$, $v \in \mathcal{V}^{\text{SFC}}_{k_{\text{tar}},F}$, $w \in \mathcal{V}^{\text{PN}}_{S}$, and $u_{dp}$, $d \in \mathcal{E}^{\text{SFC}}_{k_{\text{tar}}}$, $p \in \mathcal{P}$.
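For concreteness, the two availability-check phases of Algorithm 1 can be sketched in Python as follows. This is a reading aid, not the authors' implementation: the data structures (`mapping`, `A`, `S_N`, `S_L`, `y`) are hypothetical, and a production version would also roll back the partial updates of lines (7)–(9) when a later node causes a rejection.

```python
import math

def masrn_check(mapping, A, S_N, S_L, y, n_core, c_pn, t_proc):
    # mapping["nodes"]: (v, k, w, B) tuples -> VNF node v of type k mapped
    #                   to server w, with processing demand B = B^SFC(v)
    # mapping["links"]: (d, path, C) tuples -> virtual link d mapped to a
    #                   list of physical links, with demand C = C^SFC(d)

    # Server Resource Availability Check Phase (lines (5)-(13))
    for (v, k, w, B) in mapping["nodes"]:
        load = B * t_proc[k]
        other_cores = sum(y[w][s] for s in y[w] if s != k)
        if other_cores + math.ceil(A[w][k] + load) > n_core[w]:
            return False                    # REJECT the SFC request
        A[w][k] += load
        S_N[w] += load
        y[w][k] = math.ceil(A[w][k])

    # Link Resource Availability Check Phase (lines (14)-(20)):
    # aggregate the new demand on each physical link, check, then commit.
    extra = {}
    for (d, path, C) in mapping["links"]:
        for e in path:
            extra[e] = extra.get(e, 0.0) + C
    if any(S_L[e] + dem > c_pn[e] for e, dem in extra.items()):
        return False                        # REJECT the SFC request
    for e, dem in extra.items():
        S_L[e] += dem
    return True                             # SFC request accepted
```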

The computational complexity of the MASRN algorithm depends on the procedure for the evaluation of the potential nodes [14]. Let $N_s$ be the total number of servers. The complexity of the phase in which the potential nodes are evaluated can be carried out according to the following remarks: (i) as many servers as the number of the VNFs of the SFC are determined, and (ii) the $h$th server is selected by evaluating the $(N_s - h)$ shortest paths and by performing a minimum operation on a list with $(N_s - h)$ elements. According to these remarks, the computational complexity is given by $O(V(V + N_s)N_l \log(N_s + N_n))$, where $V$ is the maximum number of VNFs in an SFC and $N_l$ and $N_n$ are the numbers of links and nodes of the network, respectively.

[Figure 2: SFC planner architecture. The SFC Scheduler receives SFC requests and issues scheduling decisions; the Resource Monitor provides resource information; the Consolidation Module issues migration decisions. Together they form the orchestrator.]

4 Online Algorithms for SFC Routing in NFV Network Architectures

In the online approach the SFC requests arrive at the system over time, and each of them is characterized by a certain duration. In order to reduce the complexity of the online SFC resource allocation algorithm, we designed an SFC planner whose general architecture is described in Section 4.1. The operation mode of the SFC planner is based on some algorithms that we describe in Section 4.2.

4.1 SFC Planner Architecture. The architecture of the SFC planner is shown in Figure 2. It could make up the main component of an orchestrator in the NFV architecture [6]. It is composed of three components: the SFC Scheduler, the Resource Monitor, and the Consolidation Module.

Once the SFC Scheduler receives an SFC request, it executes an algorithm to verify if resources are available and to decide which resources to use.

The physical resources as well as the Virtual Machines running on the servers are monitored by the Resource Monitor. If a failure of any physical/virtual node or link occurs, it notifies the event to the SFC Scheduler.

The Consolidation Module allows for server resource consolidation in low traffic periods. When the consolidation technique is applied, VMs are migrated to as few servers as possible, the remaining ones are turned off, and power consumption saving can be achieved. A minimal skeleton of the three components is sketched below.

Whenever a new SFC request arrives at the SFC plannerthe SFC scheduling algorithm is executed in order to findthe best embedding in the system maintaining a uniformusage of the network resource The main steps of thisprocedure are reported in the flow chart in Figure 3 andare based on MASRN heuristic described in Section 33The algorithm starts selecting the set of servers on whichthe VNFs requested by the SFC will be executed In thesecond step the physical paths on which the virtual links willbe mapped are selected If there are not enough available

SFC request

Servers selectionusing the potential

factor [26]

Enoughresources

Yes

No Drop the SFCrequest

Path selection

Enoughresources

Yes

No Drop the SFCrequest

Accept the SFCrequest

Figure 3 Flow chart of the SFC scheduling algorithm

resources in the first or in the second step the request isdropped

The flow chart of the consolidation algorithm is reported in Figure 4. Each server $s \in \mathcal{V}^{\text{PN}}_{S}$ is characterized by a parameter $\eta_s$ defined as the ratio of the total amount of incoming/outgoing traffic $\Lambda_s$ that it is handling to the power $P_s$ it is consuming, that is, $\eta_s = \Lambda_s / P_s$. The power consumption model assumes a constant power contribution and a variable one that linearly increases with the handled traffic. The server power consumption is expressed by

$$P_s = P_{\text{idle}} + (P_{\max} - P_{\text{idle}}) \frac{L_s}{C_s}, \quad \forall s \in \mathcal{V}^{\text{PN}}_{S}, \tag{17}$$

where $P_{\text{idle}}$ is the power consumed by the server when no traffic is handled, $P_{\max}$ is the maximum server power consumption, $L_s$ is the total load handled by the server $s$, and $C_s$ is its maximum capacity. The maximum power that the server can consume is expressed as $P_{\max} = P_{\text{idle}}/a$, where $a$ is a parameter characterizing how much the power consumption depends on the handled traffic. The power consumption is the more rate adaptive the lower $a$ is. For $a = 1$ the rate adaptive power consumption is absent and the consumed power is constant.
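A direct transcription of (17) and of the efficiency parameter $\eta_s$ in Python, shown as a sanity check; the numeric defaults mirror the values used in Section 5, and the function names are ours:

```python
def server_power(load, capacity, p_max=1000.0, a=0.1):
    """Power model of (17); P_idle = a * P_max follows from P_max = P_idle / a.

    With p_max = 1000 W and a = 0.1, an idle server draws 100 W and a
    fully loaded one 1000 W; with a = 1 the consumption is constant.
    """
    p_idle = a * p_max
    return p_idle + (p_max - p_idle) * load / capacity

def eta(traffic, power):
    """Efficiency parameter eta_s = Lambda_s / P_s used by consolidation."""
    return traffic / power
```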

The proposed algorithm tries to switch off the servers carrying the minimum amount of traffic per unit of power consumed. For this reason, when the consolidation algorithm is executed, it starts by selecting the server $s_{\min}$ with the minimum value of $\eta_s$ as the one to turn off. A set of possible destination servers able to host the VMs executed on $s_{\min}$ is then evaluated and sorted in descending order, again based on the value of $\eta_s$. The algorithm proceeds by selecting the first element of the ordered set and evaluating if there are enough resources in the site to reroute all the flows generated from or terminated to $s_{\min}$. If it succeeds, the migration is performed and the server $s_{\min}$ is turned off. If it fails, the destination server is removed from the set and the next server of the list is considered. When the ordered set of possible destinations becomes empty, another server to turn off is selected.

[Figure 4: Flow chart of the SFC consolidation algorithm: the server with minimum $\eta_s$ is selected; the set of possible destinations is sorted by descending $\eta_s$; migration is performed towards the first destination with enough resources, otherwise that destination is removed from the set; when the set is empty, another server is selected.]
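The loop just described can be sketched as follows; `can_host`, `migrate`, and `switch_off` are hypothetical helpers standing for the resource check, the VM migration with flow rerouting, and the server shutdown, respectively.

```python
def consolidate(active_servers, eta_of, can_host, migrate, switch_off):
    """One pass of the consolidation algorithm of Figure 4 (sketch)."""
    # candidate servers to switch off: lowest eta_s (least efficient) first
    for s_min in sorted(active_servers, key=eta_of):
        # possible destinations, sorted by descending eta_s
        destinations = sorted((s for s in active_servers if s != s_min),
                              key=eta_of, reverse=True)
        for dst in destinations:
            if can_host(dst, s_min):      # enough server and link resources?
                migrate(s_min, dst)       # move the VMs and reroute the flows
                switch_off(s_min)
                break                     # proceed with another server
```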

The SFC Consolidation Module also manages the resource deconsolidation procedure when the reactivation of physical servers/links is needed. Basically, the deconsolidation algorithm selects the most loaded VMs already migrated to a new server and moves them back to their original servers. The flows handled by these VMs are accordingly rerouted.

5 Numerical Results

We evaluate the effectiveness of the proposed algorithms in the case of the network scenario reported in Figure 5. The network is composed of five core nodes, five edge nodes, and six access nodes, in which the SFC requests are randomly generated and terminated. A Network Function Virtualization (NFV) site is connected to edge nodes 3 and 4 through the two access routers 1 and 2. The NFV site is composed of two switches and sixteen servers. In the basic configuration, 40 Gbps links are considered, except the links connecting the servers to the switches in the NFV site, whose rate is equal to 10 Gbps. The sixteen servers are equipped with 48 Vcores each.

Each SFC request is composed of three Virtual Network Functions (VNFs), that is, a firewall, a load balancer, and VPN encryption, whose packet processing times are assumed to be equal to 7.08 μs, 0.65 μs, and 1.64 μs [27], respectively. We also assume that the load balancer splits the input traffic towards a number of output links chosen uniformly from 1 to 3.

An example of the graph for a generated SFC is reported in Figure 6, in which $u_t$ is an ingress access node and $v^1_t$, $v^2_t$, and $v^3_t$ are egress access nodes. The first and second VNFs are a firewall and VPN encryption; the third VNF is a load balancer splitting the traffic towards three output links. In the considered case study we also assume that the SFC handles traffic flows of packet length equal to 1500 bytes and required bandwidth uniformly distributed from 500 Mbps to 1 Gbps.
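As an illustration, a random SFC request of this kind could be generated as follows; this is a sketch under the stated assumptions, and the dictionary layout is ours:

```python
import random

def generate_sfc_request():
    """Build an SFC like the Figure 6 example.

    Chain: firewall -> VPN encryption -> load balancer splitting the
    traffic towards 1 to 3 egress access nodes; bandwidth U(500, 1000) Mbps,
    1500-byte packets.
    """
    t_proc = {"fw": 7.08e-6, "vpn": 1.64e-6, "lb": 0.65e-6}  # seconds [27]
    bw_mbps = random.uniform(500.0, 1000.0)
    b_sfc = bw_mbps * 1e6 / (1500 * 8)        # B^SFC in packets per second
    n_out = random.randint(1, 3)              # load balancer output links
    return {"chain": ["fw", "vpn", "lb"], "t_proc": t_proc,
            "B_SFC": b_sfc, "egress_count": n_out}
```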

5.1 Offline SFC Scheduling. In the offline case we assume that a given number $T$ of SFC requests is known a priori. For the case study previously described, we apply the MASRN heuristic proposed in Section 3.3. We report the percentage of dropped SFCs in Figure 7 as a function of the offered number of SFCs. We report the curve for the basic scenario and the ones in which the link capacities are doubled, quadrupled, and increased by ten times, respectively. From Figure 7 we can see how, in order to have a dropped SFC percentage lower than 10%, the number of SFC requests has to be smaller than 60 in the case of the basic scenario. This remarkable blocking is due to the shortage of network capacity. In fact, we can notice from Figure 7 how the dropped SFC percentage significantly decreases when the link bandwidth increases. For instance, when the capacity is quadrupled, the network infrastructure is able to accommodate up to 180 SFCs without any loss.

This is confirmed in Figure 8, where we report the number of Vcores occupied in the servers in the case of $T = 360$. Each of the servers is represented on the $x$-axis with the identification (ID) of Figure 5. We also show in Figure 8 the number of Vcores allocated to each firewall, load balancer, and VPN encryption VNF with the blue, green, and red colors, respectively. The basic scenario case and the ones in which the link capacity is doubled, quadrupled, and increased by 10 times are reported in Figures 8(a), 8(b), 8(c), and 8(d), respectively. From Figure 8 we can notice how the MASRN heuristic works correctly by guaranteeing a uniform occupancy of the servers in terms of the total number of Vcores used. We can also notice how the firewall VNF is the one needing the highest number of allocated Vcores. That is a consequence of the higher processing time that the firewall VNF requires with respect to the load balancer and VPN encryption VNFs.

5.2 Online SFC Scheduling. The numerical results are achieved in the following traffic scenario. The traffic pattern we consider is supposed to be sinusoidal, with a peak value during daytime and a minimum value during nighttime. We also assume that SFC requests follow a Poisson process and that the SFC duration time is distributed according to an exponential distribution.

[Figure 5: The network topology is composed of six access nodes, five edge nodes, and five core nodes. The NFV site is connected through two routers and is composed of two switches and sixteen servers (Server 1–Server 16).]

The following parameters define the SFC requests in the Peak Hour Interval (PHI):

(i) $\lambda$: arrival rate of SFC requests;

(ii) $\mu$: termination rate of an embedded SFC;

(iii) $[\beta^{\min}, \beta^{\max}]$: range in which the bandwidth request for an SFC is randomly selected.

We consider $K$ time intervals in a day, and each of them is characterized by a scaling factor $\alpha_h$, with $h = 0, \ldots, K - 1$. In order to capture the sinusoidal shape of the traffic in a day, $\alpha_h$ is evaluated with the following expression:

$$\alpha_h = \begin{cases} 1, & h = 0, \\ 1 - \dfrac{2h}{K}\,(1 - \alpha_{\min}), & h = 1, \ldots, \dfrac{K}{2}, \\ 1 - \dfrac{2(K - h)}{K}\,(1 - \alpha_{\min}), & h = \dfrac{K}{2} + 1, \ldots, K - 1, \end{cases} \tag{18}$$

where $\alpha_0$ and $\alpha_{\min}$ refer to the peak traffic and the minimum traffic scenario, respectively.

The parameter $\alpha_h$ affects the SFC arrival rate and the bandwidth requested.

bandwidth requested In particular we have the following


[Figure 6: An example of SFC composed of one firewall VNF, one VPN encryption VNF, and one load balancer VNF that splits the input traffic towards three output logical links; $u_t$ denotes the ingress access node, while $v^1_t$, $v^2_t$, and $v^3_t$ denote the egress access nodes.]

[Figure 7: Dropped SFC percentage as a function of the number of SFCs offered. The four curves are reported for the basic scenario case and the ones in which the link capacities are doubled, quadrupled, and increased by 10 times.]

In particular, we have the following expressions for the SFC request rate $\lambda_h$ and the minimum and the maximum requested bandwidths, $\beta^{\min}_h$ and $\beta^{\max}_h$, respectively, in the interval $h$ ($h = 0, \ldots, K - 1$):

$$\lambda_h = \alpha_h \lambda, \qquad \beta^{\min}_h = \alpha_h \beta^{\min}, \qquad \beta^{\max}_h = \alpha_h \beta^{\max}. \tag{19}$$
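Expressions (18) and (19) translate directly into code; the following sketch uses an illustrative $\alpha_{\min} = 0.2$, since the text does not fix that value:

```python
def alpha(h, K=24, alpha_min=0.2):
    """Scaling factor alpha_h of (18): 1 at h = 0, decreasing linearly to
    alpha_min at h = K/2, then increasing back towards 1."""
    if h == 0:
        return 1.0
    if h <= K // 2:
        return 1.0 - (2.0 * h / K) * (1.0 - alpha_min)
    return 1.0 - (2.0 * (K - h) / K) * (1.0 - alpha_min)

def interval_parameters(h, lam, beta_min, beta_max, K=24, alpha_min=0.2):
    """Per-interval SFC request rate and bandwidth range of (19)."""
    a = alpha(h, K, alpha_min)
    return a * lam, a * beta_min, a * beta_max
```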

Next we evaluate the performance of the SFC planner proposed in Section 4 in the dynamic traffic scenario and for parameter values $\beta^{\min} = 500$ Mbps, $\beta^{\max} = 1$ Gbps, $\lambda = 1$ req/min, and $\mu = 1/15$ min$^{-1}$. We also assume $K = 24$, with traffic change and consequently application of the consolidation algorithm every hour.

Finally, we consider servers whose power consumption is characterized by the expression (17), with parameters $P_{\max} = 1000$ W and $C_s = 10$ Gbps.

We report the SFC blocking probability as a function of the average number of offered SFC requests during the PHI ($\lambda/\mu$) in Figure 9. We show the results in the case of the basic link capacity scenario and in the ones obtained considering a capacity of the links that is twice and four times higher than the basic one. We consider the case of servers with no rate adaptive power consumption ($a = 1$). As we can observe, when the offered traffic is low, the degradation in terms of SFC blocking probability introduced by the consolidation technique increases. In fact, in this low traffic scenario more resource consolidation is possible, with the consequence of higher SFC blocking probabilities. When the offered traffic increases, the blocking probability with or without applying the consolidation technique does not change much, because the network is heavily loaded and only few servers can be turned off. Furthermore, as we can expect, the blocking probability decreases when the link capacity increases. The performance loss due to the application of the consolidation technique is what we have to pay in order to obtain benefits in terms of power saved by turning off servers. The curves in Figure 10 compare the power consumption percentage savings that we can achieve in the cases $a = 0.1$ and $a = 1$. The results show that, when the offered traffic decreases and the link capacity increases, higher power consumption saving can be achieved. The reason is that we have more resources on the links and less loaded VMs, which allow for a higher number of VM migrations and more resource consolidation. The other result to notice is that the use of rate adaptive servers reduces the power consumption saving when we perform resource consolidation. This is due to the lower value $P_{\text{idle}}$ of the constant power consumption of the server, which leads to a lower power consumption saving when a server is switched off.

6 Conclusions

The aim of this paper has been to propose heuristics for the resource dimensioning and the routing of Service Function Chains in network architectures employing the Network Function Virtualization technology. We have introduced the optimization problem, whose objective is the minimization of the number of dropped SFC requests under the constraints of the server processing and link bandwidth capacities. With the problem being NP-hard, we have proposed the Maximizing the Accepted SFC Requests Number (MASRN) heuristic, which is based on the uniform occupancy of the server and link resources. The heuristic is also able to dimension the Vcores of the servers by evaluating the number of Vcores to be allocated to each VNF instance.

An SFC planner has also been proposed for the dynamic traffic scenario. The planner is based on a consolidation algorithm that allows for the switching off of servers during the low traffic periods. A case study has been analyzed, in which we have shown that the heuristic works correctly by uniformly allocating the server resources and by reserving a higher number of Vcores for the VNFs characterized by a higher packet processing time. Furthermore, the proposed SFC planner allows for a power consumption saving dependent on the offered traffic and varying from 10% to 70%. Such a saving is to be paid with an increase in SFC blocking probability. As future research we will propose and investigate Virtual Machine migration policies allowing for the achievement of the right trade-off between power consumption saving and SFC blocking probability degradation.


[Figure 8: Number of Vcores allocated in each server (IDs 1–16) for $T = 360$, in the four cases in which the link capacity is the basic one, doubled, quadrupled, and increased by 10 times (panels (a)–(d)). The numbers of Vcores allocated per VNFI to the firewall, load balancer, and VPN encryption VNFs are represented with blue, green, and red colors, respectively.]

[Figure 9: SFC blocking probability as a function of the average number of SFCs offered in the PHI, for the cases in which the consolidation technique is applied and in which it is not, with link capacity ×1, ×2, and ×4. The server power consumption is constant ($a = 1$).]


[Figure 10: The power consumption percentage saving achieved with the application of the consolidation technique versus the average number of SFCs offered in the PHI. The cases $a = 1$ and $a = 0.1$ with link capacity ×1, ×2, and ×4 are considered.]


Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

The work described in this paper has been funded by Telecom Italia within the research agreement titled "Definition and Evaluation of Protocols and Architectures of a Network Automation Platform for TI-NFV Infrastructure".

References

[1] R. Mijumbi, J. Serrat, J. Gorricho, N. Bouten, F. De Turck, and R. Boutaba, "Network function virtualization: state-of-the-art and research challenges," IEEE Communications Surveys & Tutorials, vol. 18, no. 1, pp. 236–262, 2016.

[2] P. Veitch, M. J. McGrath, and V. Bayon, "An instrumentation and analytics framework for optimal and robust NFV deployment," IEEE Communications Magazine, vol. 53, no. 2, pp. 126–133, 2015.

[3] R. Yu, G. Xue, V. T. Kilari, and X. Zhang, "Network function virtualization in the multi-tenant cloud," IEEE Network, vol. 29, no. 3, pp. 42–47, 2015.

[4] Z. Bronstein, E. Roch, J. Xia, and A. Molkho, "Uniform handling and abstraction of NFV hardware accelerators," IEEE Network, vol. 29, no. 3, pp. 22–29, 2015.

[5] B. Han, V. Gopalakrishnan, L. Ji, and S. Lee, "Network function virtualization: challenges and opportunities for innovations," IEEE Communications Magazine, vol. 53, no. 2, pp. 90–97, 2015.

[6] ETSI Industry Specification Group (ISG) NFV, "ETSI Group Specifications on Network Function Virtualization, 1st Phase Documents," January 2015, http://docbox.etsi.org/ISG/NFV/Open.

[7] H. Hawilo, A. Shami, M. Mirahmadi, and R. Asal, "NFV: state of the art, challenges, and implementation in next generation mobile networks (vEPC)," IEEE Network, vol. 28, no. 6, pp. 18–26, 2014.

[8] The Internet Engineering Task Force (IETF), Service Function Chaining (SFC) Working Group (WG), 2015, https://datatracker.ietf.org/wg/sfc/charter.

[9] The Internet Engineering Task Force (IETF), Service Function Chaining (SFC) Working Group (WG) Documents, 2015, https://datatracker.ietf.org/wg/sfc/documents.

[10] M. Yoshida, W. Shen, T. Kawabata, K. Minato, and W. Imajuku, "MORSA: a multi-objective resource scheduling algorithm for NFV infrastructure," in Proceedings of the 16th Asia-Pacific Network Operations and Management Symposium (APNOMS '14), pp. 1–6, Hsinchu, Taiwan, September 2014.

[11] H. Moens and F. D. Turck, "VNF-P: a model for efficient placement of virtualized network functions," in Proceedings of the 10th International Conference on Network and Service Management (CNSM '14), pp. 418–423, November 2014.

[12] R. Mijumbi, J. Serrat, J. L. Gorricho, N. Bouten, F. De Turck, and S. Davy, "Design and evaluation of algorithms for mapping and scheduling of virtual network functions," in Proceedings of the 1st IEEE Conference on Network Softwarization (NetSoft '15), pp. 1–9, University College London, London, UK, April 2015.

[13] S. Clayman, E. Maini, A. Galis, A. Manzalini, and N. Mazzocca, "The dynamic placement of virtual network functions," in Proceedings of the IEEE Network Operations and Management Symposium (NOMS '14), pp. 1–9, Krakow, Poland, May 2014.

[14] Y. Zhu and M. H. Ammar, "Algorithms for assigning substrate network resources to virtual network components," in Proceedings of the 25th IEEE International Conference on Computer Communications (INFOCOM '06), pp. 1–12, IEEE, Barcelona, Spain, April 2006.

[15] G. Lee, M. Kim, S. Choo, S. Pack, and Y. Kim, "Optimal flow distribution in service function chaining," in Proceedings of the 10th International Conference on Future Internet (CFI '15), pp. 17–20, ACM, Seoul, Republic of Korea, June 2015.

[16] A. Fischer, J. F. Botero, M. T. Beck, H. De Meer, and X. Hesselbach, "Virtual network embedding: a survey," IEEE Communications Surveys and Tutorials, vol. 15, no. 4, pp. 1888–1906, 2013.

[17] M. Rabbani, R. Pereira Esteves, M. Podlesny, G. Simon, L. Zambenedetti Granville, and R. Boutaba, "On tackling virtual data center embedding problem," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 177–184, Ghent, Belgium, May 2013.

[18] A. Basta, W. Kellerer, M. Hoffmann, H. J. Morper, and K. Hoffmann, "Applying NFV and SDN to LTE mobile core gateways, the functions placement problem," in Proceedings of the 4th Workshop on All Things Cellular: Operations, Applications and Challenges (AllThingsCellular '14), pp. 33–38, ACM, Chicago, Ill, USA, August 2014.

[19] M. Bagaa, T. Taleb, and A. Ksentini, "Service-aware network function placement for efficient traffic handling in carrier cloud," in Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC '14), pp. 2402–2407, IEEE, Istanbul, Turkey, April 2014.

[20] S. Mehraghdam, M. Keller, and H. Karl, "Specifying and placing chains of virtual network functions," in Proceedings of the IEEE 3rd International Conference on Cloud Networking (CloudNet '14), pp. 7–13, October 2014.

[21] M. Bouet, J. Leguay, and V. Conan, "Cost-based placement of vDPI functions in NFV infrastructures," in Proceedings of the 1st IEEE Conference on Network Softwarization (NetSoft '15), pp. 1–9, London, UK, April 2015.

[22] M. Xia, M. Shirazipour, Y. Zhang, H. Green, and A. Takacs, "Network function placement for NFV chaining in packet/optical datacenters," Journal of Lightwave Technology, vol. 33, no. 8, Article ID 7005460, pp. 1565–1570, 2015.

[23] O. Hyeonseok, Y. Daeun, C. Yoon-Ho, and K. Namgi, "Design of an efficient method for identifying virtual machines compatible with service chain in a virtual network environment," International Journal of Multimedia and Ubiquitous Engineering, vol. 11, no. 9, Article ID 197208, 2014.

[24] W. Cerroni and F. Callegati, "Live migration of virtual network functions in cloud-based edge networks," in Proceedings of the IEEE International Conference on Communications (ICC '14), pp. 2963–2968, IEEE, Sydney, Australia, June 2014.

[25] P. Quinn and J. Guichard, "Service function chaining: creating a service plane via network service headers," Computer, vol. 47, no. 11, pp. 38–44, 2014.

[26] M. F. Zhani, Q. Zhang, G. Simona, and R. Boutaba, "VDC Planner: dynamic migration-aware Virtual Data Center embedding for clouds," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 18–25, May 2013.

[27] M. Dobrescu, K. Argyraki, and S. Ratnasamy, "Toward predictable performance in software packet-processing platforms," in Proceedings of the 9th USENIX Symposium on Networked Systems Design and Implementation, pp. 1–14, April 2012.

Research Article

A Game for Energy-Aware Allocation of Virtualized Network Functions

Roberto Bruschi,1 Alessandro Carrega,1,2 and Franco Davoli1,2

1CNIT-University of Genoa Research Unit, 16145 Genoa, Italy
2DITEN, University of Genoa, 16145 Genoa, Italy

Correspondence should be addressed to Alessandro Carrega; alessandro.carrega@unige.it

Received 2 October 2015; Revised 22 December 2015; Accepted 11 January 2016

Academic Editor: Xavier Hesselbach

Copyright © 2016 Roberto Bruschi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Network FunctionsVirtualization (NFV) is a network architecture concept where network functionality is virtualized and separatedinto multiple building blocks that may connect or be chained together to implement the required services The main advantagesconsist of an increase in network flexibility and scalability Indeed each part of the service chain can be allocated and reallocated atruntime depending on demand In this paper we present and evaluate an energy-aware Game-Theory-based solution for resourceallocation of Virtualized Network Functions (VNFs) within NFV environments We consider each VNF as a player of the problemthat competes for the physical network node capacity pool seeking the minimization of individual cost functions The physicalnetwork nodes dynamically adjust their processing capacity according to the incoming workload by means of an Adaptive Rate(AR) strategy that aims at minimizing the product of energy consumption and processing delay On the basis of the result of thenodesrsquo AR strategy the VNFsrsquo resource sharing costs assume a polynomial form in the workflows which admits a unique NashEquilibrium (NE) We examine the effect of different (unconstrained and constrained) forms of the nodesrsquo optimization problemon the equilibrium and compare the power consumption and delay achieved with energy-aware and non-energy-aware strategyprofiles

1. Introduction

In the last few years, power consumption has shown a growing and alarming trend in all industrial sectors, particularly in Information and Communication Technology (ICT). Public organizations, Internet Service Providers (ISPs), and telecom operators started reporting alarming statistics of network energy requirements and of the related carbon footprint since the first decade of the 2000s [1]. The Global e-Sustainability Initiative (GeSI) estimated a growth of ICT greenhouse gas emissions (in GtCO2e, Gt of CO2 equivalent gases) to 2.3% of global emissions (from 1.3% in 2002) in 2020, if no Green Network Technologies (GNTs) would be adopted [2]. On the other hand, the abatement potential of ICT in other industrial sectors is seven times the size of the ICT sector's own carbon footprint.

Only recently, due to the rise in energy price, the continuous growth of customer population, the increase in broadband access demand, and the expanding number of services being offered by telecoms and providers, has energy efficiency become a high-priority objective also for wired networks and service infrastructures (after having started to be addressed for datacenters and wireless networks).

The increasing network energy consumption essentially depends on new services offered, which follow Moore's law by doubling every two years, and on the need to sustain an ever-growing population of users and user devices. In order to support new generation network infrastructures and related services, telecoms and ISPs need a larger equipment base, with sophisticated architecture able to perform more and more complex operations in a scalable way. Notwithstanding these efforts, it is well known that most networks and networking equipment are currently still provisioned for busy or rush hour load, which typically exceeds their average utilization by a wide margin. While this margin is generally reached in rare and short time periods, the overall power consumption in today's networks remains more or less constant with respect to different traffic utilization levels.


The growing trend toward implementing networking functionalities by means of software [3] on general-purpose machines, and of making more aggressive use of virtualization, as represented by the paradigms of Software Defined Networking (SDN) [4] and Network Functions Virtualization (NFV) [5], would also not be sufficient in itself to reduce power consumption, unless accompanied by "green" optimization and consolidation strategies acting as energy-aware traffic engineering policies at the network level [6]. At the same time, processing devices inside the network need to be capable of adapting their performance to the changing traffic conditions by trading off power and Quality of Service (QoS) requirements. Among the various techniques that can be adopted to this purpose to implement Control Policies (CPs) in network processing devices, Dynamic Adaptation ones consist of adapting the processing rate (AR) or of exploiting low power consumption states in idle periods (LPI) [7].

In this paper, we introduce a Game-Theory-based solution for energy-aware allocation of Virtualized Network Functions (VNFs) within NFV environments. In more detail, in NFV networks a collection of service chains must be allocated on physical network nodes. A service chain is a set of one or more VNFs grouped together to provide specific service functionality, and it can be represented by an oriented graph, where each node corresponds to a particular VNF and each edge describes the operational flow exchanged between a pair of VNFs.

A service request can be allocated on dedicated hardware or by using resources deployed by the Service Provider (SP) that processes the request through virtualized instances. Because of this, two types of service deployments are possible in an NFV network: (i) on physical nodes and (ii) on virtualized instances.

In this paper, we focus on the second type of service deployment. We refer to this paradigm as pure NFV. As already outlined above, the SP processes the service request by means of VNFs. In particular, we developed an energy-aware solution to the problem of VNFs' allocation on physical network nodes. This solution is based on the concept of Game Theory (GT). GT is used to model interactions among self-interested players and predict their choice of strategies to optimize cost or utility functions, until a Nash Equilibrium (NE) is reached, where no player can further increase its corresponding utility through individual action (see, e.g., [8] for specific applications in networking).

More specifically, we consider a bank of physical network nodes (in this paper we also use the terms node and resource to refer to the physical network node) performing tasks on requests submitted by the players' population. Hence, in this game the role of the players is represented by VNFs that compete for the processing capacity pool, each by seeking the minimization of an individual cost function. The nodes can dynamically adjust their processing capacity according to the incoming workload (the processing power required by incoming VNF requests), by means of an AR strategy that aims at minimizing the product of energy consumption and processing delay. On the basis of the result of the nodes' AR strategy, the VNFs' resource sharing costs assume a polynomial form in the workloads, which admits a unique NE.

We examine the effect of different (unconstrained and constrained) forms of the nodes' optimization problem on the equilibrium and compare the power consumption and delay achieved with energy-aware and non-energy-aware strategy profiles.

The rest of the paper is organized as follows. Section 2 discusses the related work. Section 3 briefly summarizes the power model and AR optimization that was introduced in [9]. Although the scenario in [9] is different, it is reasonable to assume that similar considerations are valid here too. On the basis of this result, we derive a number of competitive strategies for the VNFs' allocation in Section 4. Section 5 presents various numerical evaluations, and Section 6 contains the conclusions.

2. Related Work

We now discuss the most relevant works that deal with the resource allocation of VNFs within NFV environments. We decided not only to describe solutions based on GT, but also to provide an overview of the current state of the art in this area. As introduced in the previous section, the key enabling paradigms that will considerably affect the dynamics of ICT networks are SDN and NFV, which are discussed in recent surveys [10–12]. Indeed, SPs and Network Operators (NOs) are facing increasing problems to design and implement novel network functionalities, following rapid changes that characterize the current ISPs and telecom operators (TOs) [13].

Virtualization represents an efficient and cost-effective strategy to exploit and share physical network resources. In this context, the Network Embedding Problem (NEP) has been considered in several recent works [14–19]. In particular, the Virtual Network Embedding Problem (VNEP) consists of finding the mapping between a set of requests for virtual network resources and the available underlying physical infrastructure (the so-called substrate), ensuring that some given performance requirements (on nodes and links) are guaranteed. Typical node requirements are computational resources (i.e., CPU) or storage space, whereas links have a limited bandwidth and introduce a delay. It has been shown that this problem is NP-hard (it includes as subproblem the multiway separator problem). For this reason, heuristic approaches have been devised [20].

The consolidation of virtual resources is considered in [18], by taking into account energy efficiency. The problem is formulated as a mixed integer linear programming (MILP) model, to understand the potential benefits that can be achieved by packing many different virtual tasks on the same physical infrastructure. The observed energy saving is up to 30% for nodes and up to 25% for link energy consumption. Reference [21] presents a solution for the resilient deployment of network functions, using OpenStack for the design and implementation of the proposed service orchestrator mechanism.

An allocation mechanism based on auction theory is proposed in [22]. In particular, the scheme selects the most remunerative virtual network requests, while satisfying QoS requirements and physical network constraints. The system is split into two network substrates modeling physical and virtual resources, with the final goal of finding the best mapping of virtual nodes and links onto physical ones, according to the QoS requirements (i.e., bandwidth, delay, and CPU bounds).

A novel network architecture is proposed in [23], to provide efficient coordinated control of both internal network function state and network forwarding state, in order to help operators achieve the following goals: (i) offering and satisfying tight service level agreements (SLAs); (ii) accurately monitoring and manipulating network traffic; and (iii) minimizing operating expenses.

Various engineering problems, where the action of one component has some impacts on the other components, have been modeled by GT. Indeed, GT, which has been applied at the beginning in economics and related domains, is gaining much interest today as a powerful tool to analyze and design communication networks [24]. Therefore, the problems can be formulated in the GT framework, and a stable solution for the components is obtained using the concept of equilibrium [25]. In this regard, GT has been used extensively to develop understanding of stable operating points for autonomous networks. The nodes are considered as the players. Payoff functions are often defined according to achieved connection bandwidth or similar technical metrics.

To construct algorithms with provable convergence to equilibrium points, many approaches consider network models that can be mapped to specially constructed games. Among this type of games, potential games use a real-valued function that represents the entire player set to optimize some performance metric [8, 26]. We mention Altman et al. [27], who provide an extensive survey on networking games. The models and papers discussed in this reference mostly deal with noncooperative GT, the only exception being a short section focused on bargaining games. Finally, Seddiki et al. [25] presented an approach based on two-stage noncooperative games for bandwidth allocation, which aims at reducing the complexity of network management and avoiding bandwidth performance problems in a virtualized network environment. The first stage of the game is the bandwidth negotiation, where the SP requests bandwidth from multiple Infrastructure Providers (InPs). Each InP decides whether to accept or deny the request, when the SP would cause link congestion. The second stage is the bandwidth provisioning game, where different SPs compete for the bandwidth capacity of a physical link managed by a single InP.

3. Modeling Power Management Techniques

Dynamic Adaptation techniques in physical network nodes reduce the energy usage by exploiting the fact that systems do not need to run at peak performance all the time.

Rate adaptation is obtained by tuning the clock frequency and/or voltage of processors (DVFS, Dynamic Voltage and Frequency Scaling) or by throttling the CPU clock (i.e., the clock signal is disabled for some number of cycles at regular intervals). Decreasing the operating frequency and the voltage of the node, or throttling its clock, obviously allows the reduction of power consumption and of heat dissipation, at the price of slower performance.

Advantage can be taken of these adaptation capabilities to devise control techniques based on feedback on the incoming load. One of the first proposals is that examined in a seminal paper by Nedevschi et al. [28]. Among other possibilities, we consider here the simple optimization strategy developed in [9], aimed at minimizing the product of power consumption and processing delay with respect to the processing load. The application of this control technique gives rise to a quadratic dependence of the power-delay product on the load, which will be exploited in our subsequent game theoretic resource allocation problem. Unlike the cited works, our solution considers as load the requests rate of services that the SP must process. However, apart from this difference, the general considerations described are valid here too.

Specifically, the dynamic frequency-dependent part of the processor power consumption is proportional to the clock frequency $\nu$ and to the square of the supply voltage $V$ [29]. However, if DVFS is used, there is proportionality between the frequency and the voltage raised to some power $\gamma$, $0 < \gamma \le 1$ [30]. Moreover, the power scaling effect induced by alternating active-idle periods in the queueing system served by the physical network node can be accounted for by multiplying the dynamic power consumption by an increasing concave function of the resource utilization, of the form $(f/\mu)^{1/\alpha}$, where $\alpha \ge 1$ is a parameter and $f$ and $\mu$ are the workload (processing requests per unit time) and the task processing speed, respectively. By taking $\gamma = 1$ (which implies a cubic relationship in the operating frequency), the overall dependence of the power consumption $\Phi$ on the processing speed (which is proportional to the operating frequency) can be expressed as [9]

$$\Phi(\mu) = K \mu^3 \left(\frac{f}{\mu}\right)^{1/\alpha}, \tag{1}$$

where $K > 0$ is a proportionality constant. This expression will be exploited in the optimization problems to be defined and studied in the next section.

4. System Model of Coordinated Network Management Optimization and Players' Game

We consider the situation shown in Figure 1. There are $S$ VNFs, represented by the incoming workload rates $\lambda_1, \ldots, \lambda_S$, competing for $N$ physical network nodes. The workload to the $j$th node is given by

$$f_j = \sum_{i=1}^{S} f_j^i, \tag{2}$$

where $f_j^i$ is VNF $i$'s contribution. The total workload vector of player $i$ is $\mathbf{f}^i = \mathrm{col}[f_1^i, \ldots, f_N^i]$ and, obviously,

$$\sum_{j=1}^{N} f_j^i = \lambda_i. \tag{3}$$

Figure 1: System model for VNFs resource allocation.

The computational resources have processing rates $\mu_j$, $j = 1, \ldots, N$, and we associate a cost function with each one, namely,

$$c_j(\mu_j) = \frac{K_j (f_j)^{1/\alpha_j} (\mu_j)^{3-(1/\alpha_j)}}{\mu_j - f_j}. \tag{4}$$

Cost functions of the form shown in (4) have been introduced in [9] and represent the product of power and processing delay, under M/M/1 assumption on each node's queue. They actually correspond to the average energy consumption that can be attributed to functions' handling (considering that the node will be active whenever there are queued functions). Despite the possible inaccuracy in the representation of the queueing behavior, the denominator of (4) reflects the penalty paid for approaching the resource capacity, which is a major aspect in a model to be used for control purposes.

We now consider two specific control problems, whose resulting resource allocations are interconnected. Specifically, we assume the presence of Control Policies (CPs) acting in the network node, whose aim is to dynamically adjust the processing rate in order to minimize cost functions of the form of (4). On the other hand, the players, representing the VNFs, knowing the policy adopted by the CPs and the resulting form of their optimal costs, implement their own strategies to distribute their requests among the different resources.

The simple control structure that we envisage actually represents a situation that may be of interest in a number of different operational settings. We only mention two different cases of current high relevance: (i) a multitenant environment, where a number of ISPs are offered services by a datacenter operator (or by a telecom operator in the access network) on a number of virtual machines, and (ii) a collection of virtual environments created by a SP on behalf of its customers, where services are activated to represent customers' functionalities on their own private virtual LANs. It is worth noting that in both cases the nodes' customers may be unwilling to disclose their loads to one another, which justifies the decentralized game optimization.

4.1. Nodes' Control Policies. We consider two possible variants.

4.1.1. CP1 (Unconstrained). The direct minimization of (4) with respect to $\mu_j$ yields immediately

$$\mu_j^*(f_j) = \frac{3\alpha_j - 1}{2\alpha_j - 1} f_j = \theta_j f_j, \qquad c_j^*(f_j) = K_j \frac{(\theta_j)^{3-(1/\alpha_j)}}{\theta_j - 1} (f_j)^2 = h_j \cdot (f_j)^2. \tag{5}$$

We must note that we are considering a continuous solution to the node capacity adjustment. In practice, the physical resources allow a discrete set of working frequencies, with corresponding processing capacities. This would also ensure that the processing speed does not decrease below a lower threshold, avoiding excessive delay in the case of low load. The unconstrained problem is an idealized variant that would make sense only when the load on the node is not too small.
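As an illustration, the following minimal Python sketch (our own notation, not code from the paper) evaluates the power model (1) and the CP1 solution (5); it also checks numerically that the optimal cost $c_j^*$ is indeed the power-delay product $\Phi(\mu_j^*) \cdot 1/(\mu_j^* - f_j)$ at the optimal rate.

```python
import numpy as np

def power(mu, f, K, alpha):
    """Power model (1): Phi(mu) = K * mu^3 * (f/mu)^(1/alpha)."""
    return K * mu**3 * (f / mu)**(1.0 / alpha)

def cp1(f, K, alpha):
    """CP1 solution (5): mu* = theta*f and c* = h*f^2."""
    theta = (3.0 * alpha - 1.0) / (2.0 * alpha - 1.0)
    h = K * theta**(3.0 - 1.0 / alpha) / (theta - 1.0)
    return theta * f, h * f**2

# Example node (arbitrary test values): K = 5, alpha = 2.5, f = 0.6
K, alpha, f = 5.0, 2.5, 0.6
mu_star, c_star = cp1(f, K, alpha)
delay = 1.0 / (mu_star - f)                  # M/M/1 mean delay
assert np.isclose(c_star, power(mu_star, f, K, alpha) * delay)
```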

4.1.2. CP2 (Individual Constraints). Each node has a maximum and a minimum processing capacity, which are taken explicitly into account ($\mu_j^{\min} \le \mu_j \le \mu_j^{\max}$, $j = 1, \ldots, N$). Then,

$$\mu_j^*(f_j) = \begin{cases} \theta_j f_j, & \underline{f}_j = \dfrac{\mu_j^{\min}}{\theta_j} \le f_j \le \dfrac{\mu_j^{\max}}{\theta_j} = \overline{f}_j, \\ \mu_j^{\min}, & 0 < f_j < \underline{f}_j, \\ \mu_j^{\max}, & \mu_j^{\max} > f_j > \overline{f}_j. \end{cases} \tag{6}$$

Therefore,

$$c_j^*(f_j) = \begin{cases} h_j \cdot (f_j)^2, & \underline{f}_j \le f_j \le \overline{f}_j, \\[4pt] \dfrac{K_j (f_j)^{1/\alpha_j} (\mu_j^{\max})^{3-(1/\alpha_j)}}{\mu_j^{\max} - f_j}, & \mu_j^{\max} > f_j > \overline{f}_j, \\[4pt] \dfrac{K_j (f_j)^{1/\alpha_j} (\mu_j^{\min})^{3-(1/\alpha_j)}}{\mu_j^{\min} - f_j}, & 0 < f_j < \underline{f}_j. \end{cases} \tag{7}$$

In this case, we assume explicitly that $\sum_{i=1}^{S} \lambda_i < \sum_{j=1}^{N} \mu_j^{\max}$.
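A compact way to read (6)-(7) is that CP2 simply clips the CP1 rate $\theta_j f_j$ into $[\mu_j^{\min}, \mu_j^{\max}]$ and pays the corresponding power-delay cost. A minimal sketch of this piecewise rule follows (our hypothetical helper, consistent with the previous snippet's conventions):

```python
def cp2(f, K, alpha, mu_min, mu_max):
    """CP2 (6)-(7): clip the CP1 rate into [mu_min, mu_max] and
    return (mu*, c*); the cost is finite only while f < mu*."""
    theta = (3.0 * alpha - 1.0) / (2.0 * alpha - 1.0)
    mu = min(max(theta * f, mu_min), mu_max)
    assert f < mu, "workload must stay below the allocated capacity"
    cost = K * f**(1.0 / alpha) * mu**(3.0 - 1.0 / alpha) / (mu - f)
    return mu, cost
```

In the inner region the returned cost coincides with $h_j (f_j)^2$, while outside it grows according to the saturated branches of (7).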

4.2. Noncooperative Players' Optimization. Given the optimal CPs, which can be found in functional form, we can then state the following.

4.2.1. Players' Problem. Each VNF $i$ wants to minimize, with respect to its workload vector $\mathbf{f}^i = \mathrm{col}[f_1^i, \ldots, f_N^i]$, a weighted (over its workload distribution) sum of the resources' costs, given the flow distribution of the others:

$$\mathbf{f}^{i*} = \operatorname*{argmin}_{f_1^i, \ldots, f_N^i;\; \sum_{j=1}^{N} f_j^i = \lambda_i} J^i = \operatorname*{argmin}_{f_1^i, \ldots, f_N^i;\; \sum_{j=1}^{N} f_j^i = \lambda_i} \frac{1}{\lambda_i} \sum_{j=1}^{N} f_j^i \, c_j^*(f_j), \quad i = 1, \ldots, S. \tag{8}$$

In this formulation, the players' problem is a noncooperative game, of which we can seek a NE [31].

We examine the case of the application of CP1 first. Then, the cost of VNF $i$ is given by

$$J^i = \frac{1}{\lambda_i} \sum_{j=1}^{N} f_j^i h_j \cdot (f_j)^2 = \frac{1}{\lambda_i} \sum_{j=1}^{N} f_j^i h_j \cdot (f_j^i + f_j^{-i})^2, \tag{9}$$

where $f_j^{-i} = \sum_{k \ne i} f_j^k$ is the total flow from the other VNFs to node $j$. The cost in (9) is convex in $f_j^i$ and belongs to a category of cost functions previously investigated in the networking literature, without specific reference to energy efficiency [32, 33]. In particular, it is in the family of functions considered in [33], for which the NE Point (NEP) existence and uniqueness conditions of Rosen [34] have been shown to hold. Therefore, our players' problem admits a unique NEP. The latter can be found by considering the Karush-Kuhn-Tucker stationarity conditions with Lagrange multiplier $\xi^i$:

$$\xi^i = \frac{\partial J^i}{\partial f_k^i}, \quad f_k^i > 0; \qquad \xi^i \le \frac{\partial J^i}{\partial f_k^i}, \quad f_k^i = 0; \qquad k = 1, \ldots, N, \tag{10}$$

from which, for $f_k^i > 0$,

$$\lambda_i \xi^i = h_k (f_k^i + f_k^{-i})^2 + 2 h_k f_k^i (f_k^i + f_k^{-i}), \qquad f_k^i = -\frac{2}{3} f_k^{-i} \pm \frac{1}{3 h_k} \sqrt{(h_k f_k^{-i})^2 + 3 h_k \lambda_i \xi^i}. \tag{11}$$

Excluding the negative solution, the nonzero components are those with the smallest and equal partial derivatives, which in a subset $\mathcal{M} \subseteq \{1, \ldots, N\}$ yield $\sum_{j \in \mathcal{M}} f_j^i = \lambda_i$; that is,

$$\sum_{j \in \mathcal{M}} \left[ -\frac{2}{3} f_j^{-i} + \frac{1}{3 h_j} \sqrt{(h_j f_j^{-i})^2 + 3 h_j \lambda_i \xi^i} \right] = \lambda_i, \tag{12}$$

from which $\xi^i$ can be determined.

As regards form (7) of the optimal node cost, we can note that, if we are restricted to $\underline{f}_j \le f_j \le \overline{f}_j$, $j = 1, \ldots, N$ (a coupled convex constraint set), the conditions of Rosen still hold. If we allow the larger constraint set $0 < f_j < \mu_j^{\max}$, $j = 1, \ldots, N$, the nodes' optimal cost functions are no longer of polynomial type. However, the composite function is still continuously differentiable (see the Appendix). Then, it would (i) satisfy the conditions for a convex game (the overall function composed of the three parts is convex), which guarantee the existence of a NEP [34, Th. 1], and (ii) possess equivalent properties to functions of Type A as defined in [32], which lead to uniqueness of the NEP.
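Conditions (10)-(12) translate directly into a numerical procedure: for each player, bisect on $\xi^i$ until the positive-part flows in (11) sum to $\lambda_i$, and iterate over the players until the profile stabilizes. The following Python sketch (a best-response iteration written under our own assumptions; the paper does not prescribe a specific algorithm) illustrates the computation of the unique NEP of the quadratic game (9).

```python
import numpy as np

def best_response(lam_i, f_other, h, tol=1e-10):
    """Best response of one VNF via (11)-(12): bisection on the
    multiplier xi so that the nonnegative flows sum to lambda_i."""
    def flows(xi):
        root = np.sqrt((h * f_other)**2 + 3.0 * h * lam_i * xi)
        return np.maximum(0.0, -2.0 / 3.0 * f_other + root / (3.0 * h))
    lo, hi = 0.0, 1.0
    while flows(hi).sum() < lam_i:            # bracket the multiplier
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if flows(mid).sum() < lam_i else (lo, mid)
    return flows(hi)

def nash_equilibrium(lam, h, sweeps=200):
    """Gauss-Seidel best-response sweeps for the quadratic game (9)."""
    S, N = len(lam), len(h)
    f = np.outer(lam, np.ones(N)) / N         # start from even splits
    for _ in range(sweeps):
        for i in range(S):
            f[i] = best_response(lam[i], f.sum(axis=0) - f[i], h)
    return f

rng = np.random.default_rng(1)
lam = rng.uniform(0.1, 1.0, size=3)           # normalized VNF rates
alpha = rng.uniform(2.0, 3.0, size=3)         # per-node parameters
K = rng.uniform(1.0, 10.0, size=3)
theta = (3 * alpha - 1) / (2 * alpha - 1)
h = K * theta**(3 - 1 / alpha) / (theta - 1)  # h_j from (5)
f_star = nash_equilibrium(lam, h)
print(f_star.sum(axis=1), lam)                # each row sums to lambda_i
```

Since the game admits a unique NEP, the sweeps settle on the same profile regardless of the starting split in our trials; a formal convergence proof of this particular iteration is beyond the scope of the sketch.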

5. Numerical Results

In order to evaluate our criterion, we present numerical results deriving from the application of the simplest Control Policy (CP1), which implies the solution of a completely quadratic problem (corresponding to costs as in (9) for the NEP). We compare the resulting allocations, power consumption, and average delays with those stemming from the application (on non-energy-aware nodes) of the algorithm for the minimization of the pure delay function that was developed in [35, Proposition 1]. This algorithm is considered a valid reference point, in order to provide good validation of our criterion. The corresponding cost of VNF $i$ is

$$J_T^i = \sum_{j=1}^{N} f_j^i T_j(f_j), \quad i = 1, \ldots, S, \tag{13}$$

where

$$T_j(f_j) = \begin{cases} \dfrac{1}{\mu_j^{\max} - f_j}, & f_j < \mu_j^{\max}, \\ \infty, & f_j \ge \mu_j^{\max}. \end{cases} \tag{14}$$

The total demand is assumed to be less than the total operating capacity, as we have done with our CP2 in (7).

To make the comparison fair, we have to make sure that the maximum operating capacities $\mu_j^{\max}$, $j = 1, \ldots, N$ (that are constant in (14)), are compatible with the quadratic behavior stemming from the application of CP1. To this aim, let ${}^{(\mathrm{LCP1})}f_j$ denote the total input workload to node $j$, corresponding to the NEP obtained by our problem; we then choose

$$\mu_j^{\max} = \theta_j \, {}^{(\mathrm{LCP1})}f_j, \quad j = 1, \ldots, N. \tag{15}$$

We indicate with ${}^{(T)}f_j$, ${}^{(T)}f_j^i$, $j = 1, \ldots, N$, $i = 1, \ldots, S$, the total node input workloads and those corresponding to VNF $i$, respectively, produced by the NEP deriving from the minimization of costs in (13). The average delays for the flow of player $i$ under the two strategy profiles are obtained as

$$D_T^i = \frac{1}{\lambda_i} \sum_{j=1}^{N} \frac{{}^{(T)}f_j^i}{\mu_j^{\max} - {}^{(T)}f_j} = \frac{1}{\lambda_i} \sum_{j=1}^{N} \frac{{}^{(T)}f_j^i}{\theta_j \, {}^{(\mathrm{LCP1})}f_j - {}^{(T)}f_j},$$

$$D_{\mathrm{LCP1}}^i = \frac{1}{\lambda_i} \sum_{j=1}^{N} \frac{{}^{(\mathrm{LCP1})}f_j^i}{\theta_j \, {}^{(\mathrm{LCP1})}f_j - {}^{(\mathrm{LCP1})}f_j} = \frac{1}{\lambda_i} \sum_{j=1}^{N} \frac{{}^{(\mathrm{LCP1})}f_j^i}{(\theta_j - 1) \, {}^{(\mathrm{LCP1})}f_j}, \tag{16}$$

and the power consumption values are given by

$${}^{(T)}P_j = K_j (\mu_j^{\max})^3 = K_j \left(\theta_j \, {}^{(\mathrm{LCP1})}f_j\right)^3,$$

$${}^{(\mathrm{LCP1})}P_j = K_j \left(\theta_j \, {}^{(\mathrm{LCP1})}f_j\right)^3 \left( \frac{{}^{(\mathrm{LCP1})}f_j}{\theta_j \, {}^{(\mathrm{LCP1})}f_j} \right)^{1/\alpha_j} = K_j \left(\theta_j \, {}^{(\mathrm{LCP1})}f_j\right)^3 \theta_j^{-1/\alpha_j}. \tag{17}$$

Our data for the comparison are organized as follows. We consider normalized input rates, so that $\lambda_i \le 1$, $i = 1, \ldots, S$. We generate randomly $S$ values, corresponding to the VNFs' input rates, from independent uniform distributions; then,

(1) we find the NEP of our quadratic game, by choosing the parameters $K_j$, $\alpha_j$, $j = 1, \ldots, N$, also from independent uniform distributions (with $1 \le K_j \le 10$, $2 \le \alpha_j \le 3$, resp.);

(2) by using the workload profiles obtained, we compute the values $\mu_j^{\max}$, $j = 1, \ldots, N$, from (15) and find the NEP corresponding to costs in (13) and (14), by using the same input rates;

(3) we compute the corresponding average delays and power consumption values for the two cases, by using (16) and (17), respectively.

Steps (1)-(3) are repeated with a new configuration of random values of the input rates for a certain number of times; then, for both games, we average the values of the delays and power consumption per VNF, and of the total delay and power consumption averaged over the VNFs, over all trials. We compare the results obtained in this way for four different settings of VNFs and nodes, namely, $S = N = 3$; $S = 3$, $N = 5$; $S = 5$, $N = 3$; $S = 5$, $N = 5$. The rationale of repeating the experiments with random configurations is to explore a number of different possible settings and collect the results in a synthetic form.
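The dimensioning rule (15) and the comparison metrics can be sketched as follows (a continuation of the earlier NEP snippet; it covers only the quantities that do not require solving the reference game of [35], since the NEE delays $D_T^i$ would additionally need the NEP of costs (13)-(14)).

```python
import numpy as np

def compare_profiles(f_star, theta, K, alpha, lam):
    """Dimensioning (15) and metrics (16)-(17) at the energy-aware NEP
    f_star (S x N), e.g., as returned by nash_equilibrium() above."""
    f_tot = f_star.sum(axis=0)                 # (LCP1) f_j per node
    mu_max = theta * f_tot                     # rule (15)
    P_nee = K * mu_max**3                      # (17), non-energy-aware
    P_ee = P_nee * theta**(-1.0 / alpha)       # (17), energy-aware
    saving_pct = 100.0 * (1.0 - P_ee.sum() / P_nee.sum())
    # (16): average delay per VNF at the energy-aware equilibrium
    D_ee = (f_star / ((theta - 1.0) * f_tot)).sum(axis=1) / lam
    return saving_pct, D_ee
```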

For each VNFs-nodes setting, 30 random input configurations are generated to produce the final average values. In Figures 2–10, the labels EE and NEE refer to the energy-efficient case (quadratic cost) and to the non-energy-efficient one (see (13) and (14)), respectively. Instead, the label "Saving (%)" refers to the saving (in percentage) of the EE case compared to the NEE one. When it is negative, the EE case presents higher values than the NEE one. Finally, the label "User" refers to the VNF.

Figure 2: Power consumption with $S = N = 3$.

Figure 3: User (VNF) delays with $S = N = 3$.

The results are reported in Figures 2–10. The model implementation and the relative results have been developed with MATLAB® [36].

In general, the energy-efficient optimization tends to save energy at the expense of a relatively small increase in delay. Indeed, as shown in the figures, the saving is positive for the energy consumption and negative for delay. The energy saving is always higher than 18% in every case, while the delay increase is always lower than 4%.

However, it can be noted (by comparing, e.g., Figures 5 and 7) that configurations with lower load are penalized in the delay. This effect is caused by the behavior of the simple linear adaptation strategy, which maintains constant utilization and highlights the relevance of setting not only maximum but also minimum values for the processing capacity (or for the node input workloads).

Figure 4: Power consumption with $S = 3$, $N = 5$.

Figure 5: User (VNF) delays with $S = 3$, $N = 5$.

Figure 6: Power consumption with $S = 5$, $N = 3$.

Figure 7: User (VNF) delays with $S = 5$, $N = 3$.

Figure 8: Power consumption with $S = 5$, $N = 5$.

Figure 9: User (VNF) delays with $S = 5$, $N = 5$.

Figure 10: Power consumption and delay product of the various cases.

Another consideration arising from the results concerns the variation of the energy consumption by changing the number of players and/or the nodes. For example, Figure 6 (case $S = 5$, $N = 3$) shows total power consumption significantly higher than the one shown in Figure 8 (case $S = 5$, $N = 5$). The reasons for this difference are due to the lower workload managed by the nodes of the case $S = 5$, $N = 5$ (greater number of nodes with an equal number of users), making it possible to exploit the AR mechanisms with significant energy saving.

Finally, Figure 10 shows the product of the power consumption and the delay for each studied case. As described in the previous sections, our criterion aims at minimizing this product. As can be easily seen, the saving resulting from the application of our policy is always above 10%.

6. Conclusions

We have examined the possible effect of introducing simple energy optimization strategies in physical network nodes performing services for a set of VNFs that request their computational power within an NFV environment. The VNFs' strategy profiles for resource sharing have been obtained from the solution of a noncooperative game, where the players are aware of the adaptive behavior of the resources and aim at minimizing costs that reflect this behavior. We have compared the energy consumption and processing delay with those stemming from a game played with non-energy-aware nodes and VNFs. The results show that the effect on delay is almost negligible, whereas significant power saving can be obtained. In particular, the energy saving is always higher than 18% in every case, with a delay increase always lower than 4%.

From another point of view, it would also be of interest to investigate socially optimal solutions stemming from a Nash Bargaining Problem (NBP), along the lines of [37], by defining suitable utility functions for the players. Another interesting evolution of this work could be the application of additional constraints to the proposed solution, such as, for example, the maximum delay that a flow can support.

Appendix

We check the maintenance of continuous differentiability at the point $f_j = \mu_j^{\max}/\theta_j$. By noting that the derivative of the cost function of player $i$ with respect to $f_j^i$ has the form

$$\frac{\partial J^i}{\partial f_j^i} = c_j^*(f_j) + f_j^i \, c_j^{*\prime}(f_j), \tag{A.1}$$

it is immediate to note that we need to compare

$$\frac{\partial}{\partial f_j^i} \left. \frac{K_j (f_j^i + f_j^{-i})^{1/\alpha_j} (\mu_j^{\max})^{3-1/\alpha_j}}{\mu_j^{\max} - f_j^i - f_j^{-i}} \right|_{f_j^i = \mu_j^{\max}/\theta_j - f_j^{-i}} \quad \text{with} \quad \frac{\partial}{\partial f_j^i} \left. h_j \cdot (f_j)^2 \right|_{f_j^i = \mu_j^{\max}/\theta_j - f_j^{-i}}. \tag{A.2}$$

We have

$$\begin{aligned} & K_j \left. \frac{(1/\alpha_j)(f_j)^{1/\alpha_j - 1} (\mu_j^{\max})^{3-1/\alpha_j} (\mu_j^{\max} - f_j) + (f_j)^{1/\alpha_j} (\mu_j^{\max})^{3-1/\alpha_j}}{(\mu_j^{\max} - f_j)^2} \right|_{f_j = \mu_j^{\max}/\theta_j} \\ &\quad = \frac{K_j (\mu_j^{\max})^{3-1/\alpha_j} (\mu_j^{\max}/\theta_j)^{1/\alpha_j} \left[ (1/\alpha_j)(\mu_j^{\max}/\theta_j)^{-1}(\mu_j^{\max} - \mu_j^{\max}/\theta_j) + 1 \right]}{(\mu_j^{\max} - \mu_j^{\max}/\theta_j)^2} \\ &\quad = \frac{K_j (\mu_j^{\max})^{3} (\theta_j)^{-1/\alpha_j} \left[ (1/\alpha_j)\, \theta_j (1 - 1/\theta_j) + 1 \right]}{(\mu_j^{\max})^2 (1 - 1/\theta_j)^2} = \frac{K_j \mu_j^{\max} (\theta_j)^{-1/\alpha_j} \left( \theta_j/\alpha_j - 1/\alpha_j + 1 \right)}{\left( (\theta_j - 1)/\theta_j \right)^2}, \\ & 2 h_j f_j \Big|_{f_j = \mu_j^{\max}/\theta_j} = 2 K_j \frac{(\theta_j)^{3-1/\alpha_j}}{\theta_j - 1} \cdot \frac{\mu_j^{\max}}{\theta_j}. \end{aligned} \tag{A.3}$$

Then, let us check when

$$\frac{K_j \mu_j^{\max} (\theta_j)^{-1/\alpha_j} \left( \theta_j/\alpha_j - 1/\alpha_j + 1 \right)}{\left( (\theta_j - 1)/\theta_j \right)^2} = 2 K_j \frac{(\theta_j)^{3-1/\alpha_j}}{\theta_j - 1} \cdot \frac{\mu_j^{\max}}{\theta_j} \;\Longleftrightarrow\; \frac{\left( \theta_j/\alpha_j - 1/\alpha_j + 1 \right) (\theta_j)^2}{\theta_j - 1} = 2 (\theta_j)^2 \;\Longleftrightarrow\; \frac{\theta_j}{\alpha_j} - \frac{1}{\alpha_j} + 1 = 2\theta_j - 2.$$

Substituting $\theta_j = (3\alpha_j - 1)/(2\alpha_j - 1)$,

$$\left( \frac{1}{\alpha_j} - 2 \right) \frac{3\alpha_j - 1}{2\alpha_j - 1} - \frac{1}{\alpha_j} + 3 = 0 \;\Longleftrightarrow\; \left( \frac{1 - 2\alpha_j}{\alpha_j} \right) \frac{3\alpha_j - 1}{2\alpha_j - 1} - \frac{1}{\alpha_j} + 3 = 0 \;\Longleftrightarrow\; \frac{1 - 3\alpha_j}{\alpha_j} - \frac{1}{\alpha_j} + 3 = 0 \;\Longleftrightarrow\; 1 - 3\alpha_j - 1 + 3\alpha_j = 0, \tag{A.4}$$

which holds identically for every $\alpha_j$. Hence, the two one-sided derivatives in (A.2) coincide, and the composite cost (7) is continuously differentiable at $f_j = \mu_j^{\max}/\theta_j$.
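The identity can also be verified numerically; the following short sketch compares one-sided finite-difference derivatives of the two branches of (7) at the junction $f_j = \mu_j^{\max}/\theta_j$ for a few values of $\alpha_j$ (the constants $K_j$ and $\mu_j^{\max}$ are arbitrary test values, chosen for illustration only).

```python
# Numerical check of the Appendix: the inner (quadratic) and the
# saturated branches of the cost (7) have matching derivatives at
# the junction f = mu_max / theta, for any alpha >= 1.
for alpha in (1.0, 1.7, 2.5, 3.0):
    K, mu_max = 4.0, 2.0
    theta = (3 * alpha - 1) / (2 * alpha - 1)
    h = K * theta**(3 - 1 / alpha) / (theta - 1)
    f0, eps = mu_max / theta, 1e-7
    quad = lambda f: h * f**2
    capped = lambda f: K * f**(1 / alpha) * mu_max**(3 - 1 / alpha) / (mu_max - f)
    d_left = (quad(f0) - quad(f0 - eps)) / eps      # derivative from below
    d_right = (capped(f0 + eps) - capped(f0)) / eps # derivative from above
    assert abs(d_left - d_right) < 1e-3 * abs(d_left)
```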

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was partially supported by the H2020 European Projects ARCADIA (http://www.arcadia-framework.eu/) and INPUT (http://www.input-project.eu/), funded by the European Commission.

References

[1] C. Bianco, F. Cucchietti, and G. Griffa, "Energy consumption trends in the next generation access network – a telco perspective," in Proceedings of the 29th International Telecommunication Energy Conference (INTELEC '07), pp. 737–742, Rome, Italy, September-October 2007.

[2] GeSI, SMARTer 2020: The Role of ICT in Driving a Sustainable Future, December 2012, http://gesi.org/portfolio/report/72.

[3] A. Manzalini, "Future edge ICT networks," IEEE COMSOC MMTC E-Letter, vol. 7, no. 7, pp. 1–4, 2012.

[4] B. A. A. Nunes, M. Mendonca, X.-N. Nguyen, K. Obraczka, and T. Turletti, "A survey of software-defined networking: past, present, and future of programmable networks," IEEE Communications Surveys & Tutorials, vol. 16, no. 3, pp. 1617–1634, 2014.

[5] "Network functions virtualisation – introductory white paper," in Proceedings of the SDN and OpenFlow World Congress, Darmstadt, Germany, October 2012.

[6] R. Bolla, R. Bruschi, F. Davoli, and C. Lombardo, "Fine-grained energy-efficient consolidation in SDN networks and devices," IEEE Transactions on Network and Service Management, vol. 12, no. 2, pp. 132–145, 2015.

[7] R. Bolla, R. Bruschi, F. Davoli, and F. Cucchietti, "Energy efficiency in the future internet: a survey of existing approaches and trends in energy-aware fixed network infrastructures," IEEE Communications Surveys & Tutorials, vol. 13, no. 2, pp. 223–244, 2011.

[8] I. Menache and A. Ozdaglar, Network Games: Theory, Models and Dynamics, Morgan & Claypool Publishers, San Rafael, Calif, USA, 2011.

[9] R. Bolla, R. Bruschi, F. Davoli, and P. Lago, "Optimizing the power-delay product in energy-aware packet forwarding engines," in Proceedings of the 24th Tyrrhenian International Workshop on Digital Communications – Green ICT (TIWDC '13), pp. 1–5, IEEE, Genoa, Italy, September 2013.

[10] A. Khan, A. Zugenmaier, D. Jurca, and W. Kellerer, "Network virtualization: a hypervisor for the internet?" IEEE Communications Magazine, vol. 50, no. 1, pp. 136–143, 2012.

[11] Q. Duan, Y. Yan, and A. V. Vasilakos, "A survey on service-oriented network virtualization toward convergence of networking and cloud computing," IEEE Transactions on Network and Service Management, vol. 9, no. 4, pp. 373–392, 2012.

[12] G. M. Roy, S. K. Saurabh, N. M. Upadhyay, and P. K. Gupta, "Creation of virtual node, virtual link and managing them in network virtualization," in Proceedings of the World Congress on Information and Communication Technologies (WICT '11), pp. 738–742, IEEE, Mumbai, India, December 2011.

[13] A. Manzalini, R. Saracco, C. Buyukkoc et al., "Software-defined networks or future networks and services – main technical challenges and business implications," in Proceedings of the IEEE Workshop SDN4FNS, Trento, Italy, November 2013.

[14] M. Yu, Y. Yi, J. Rexford, and M. Chiang, "Rethinking virtual network embedding: substrate support for path splitting and migration," ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, pp. 17–19, 2008.

[15] A. Jarray and A. Karmouch, "Decomposition approaches for virtual network embedding with one-shot node and link mapping," IEEE/ACM Transactions on Networking, vol. 23, no. 3, pp. 1012–1025, 2015.


[16] N. M. M. K. Chowdhury, M. R. Rahman, and R. Boutaba, "Virtual network embedding with coordinated node and link mapping," in Proceedings of the 28th Conference on Computer Communications (IEEE INFOCOM '09), pp. 783–791, Rio de Janeiro, Brazil, April 2009.

[17] X. Cheng, S. Su, Z. Zhang et al., "Virtual network embedding through topology-aware node ranking," ACM SIGCOMM Computer Communication Review, vol. 41, no. 2, pp. 38–47, 2011.

[18] J. F. Botero, X. Hesselbach, M. Duelli, D. Schlosser, A. Fischer, and H. De Meer, "Energy efficient virtual network embedding," IEEE Communications Letters, vol. 16, no. 5, pp. 756–759, 2012.

[19] A. Leivadeas, C. Papagianni, and S. Papavassiliou, "Efficient resource mapping framework over networked clouds via iterated local search-based request partitioning," IEEE Transactions on Parallel and Distributed Systems, vol. 24, no. 6, pp. 1077–1086, 2013.

[20] A. Fischer, J. F. Botero, M. T. Beck, H. de Meer, and X. Hesselbach, "Virtual network embedding: a survey," IEEE Communications Surveys & Tutorials, vol. 15, no. 4, pp. 1888–1906, 2013.

[21] M. Scholler, M. Stiemerling, A. Ripke, and R. Bless, "Resilient deployment of virtual network functions," in Proceedings of the 5th International Congress on Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT '13), pp. 208–214, Almaty, Kazakhstan, September 2013.

[22] A. Jarray and A. Karmouch, "VCG auction-based approach for efficient Virtual Network embedding," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 609–615, Ghent, Belgium, May 2013.

[23] A. Gember-Jacobson, R. Viswanathan, C. Prakash et al., "OpenNF: enabling innovation in network function control," in Proceedings of the ACM Conference on Special Interest Group on Data Communication (SIGCOMM '14), pp. 163–174, ACM, Chicago, Ill, USA, August 2014.

[24] J. Antoniou and A. Pitsillides, Game Theory in Communication Networks: Cooperative Resolution of Interactive Networking Scenarios, CRC Press, Boca Raton, Fla, USA, 2013.

[25] M. S. Seddiki, M. Frikha, and Y.-Q. Song, "A non-cooperative game-theoretic framework for resource allocation in network virtualization," Telecommunication Systems, vol. 61, no. 2, pp. 209–219, 2016.

[26] L. Pavel, Game Theory for Control of Optical Networks, Springer, New York, NY, USA, 2012.

[27] E. Altman, T. Boulogne, R. El-Azouzi, T. Jimenez, and L. Wynter, "A survey on networking games in telecommunications," Computers & Operations Research, vol. 33, no. 2, pp. 286–311, 2006.

[28] S. Nedevschi, L. Popa, G. Iannaccone, S. Ratnasamy, and D. Wetherall, "Reducing network energy consumption via sleeping and rate-adaptation," in Proceedings of the 5th USENIX Symposium on Networked Systems Design and Implementation (NSDI '08), pp. 323–336, San Francisco, Calif, USA, April 2008.

[29] S. Henzler, Power Management of Digital Circuits in Deep Sub-Micron CMOS Technologies, vol. 25 of Springer Series in Advanced Microelectronics, Springer, Dordrecht, The Netherlands, 2007.

[30] K. Li, "Scheduling parallel tasks on multiprocessor computers with efficient power management," in Proceedings of the IEEE International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum (IPDPSW '10), pp. 1–8, IEEE, Atlanta, Ga, USA, April 2010.

[31] T. Basar, Lecture Notes on Non-Cooperative Game Theory, Hamilton Institute, Kildare, Ireland, 2010, http://www.hamilton.ie/ollie/Downloads/Game.pdf.

[32] A. Orda, R. Rom, and N. Shimkin, "Competitive routing in multiuser communication networks," IEEE/ACM Transactions on Networking, vol. 1, no. 5, pp. 510–521, 1993.

[33] E. Altman, T. Basar, T. Jimenez, and N. Shimkin, "Competitive routing in networks with polynomial costs," IEEE Transactions on Automatic Control, vol. 47, no. 1, pp. 92–96, 2002.

[34] J. B. Rosen, "Existence and uniqueness of equilibrium points for concave N-person games," Econometrica, vol. 33, no. 3, pp. 520–534, 1965.

[35] Y. A. Korilis, A. A. Lazar, and A. Orda, "Architecting noncooperative networks," IEEE Journal on Selected Areas in Communications, vol. 13, no. 7, pp. 1241–1251, 1995.

[36] MATLAB®: The Language of Technical Computing, http://www.mathworks.com/products/matlab/.

[37] H. Yaïche, R. R. Mazumdar, and C. Rosenberg, "A game theoretic framework for bandwidth allocation and pricing in broadband networks," IEEE/ACM Transactions on Networking, vol. 8, no. 5, pp. 667–678, 2000.

Research Article
A Processor-Sharing Scheduling Strategy for NFV Nodes

Giuseppe Faraci, Alfio Lombardo, and Giovanni Schembra

Dipartimento di Ingegneria Elettrica, Elettronica e Informatica (DIEEI), University of Catania, 95123 Catania, Italy

Correspondence should be addressed to Giovanni Schembra; schembra@dieei.unict.it

Received 2 November 2015; Accepted 12 January 2016

Academic Editor: Xavier Hesselbach

Copyright © 2016 Giuseppe Faraci et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The introduction of the two paradigms SDN and NFV to "softwarize" the current Internet is making management and resource allocation two key challenges in the evolution towards the Future Internet. In this context, this paper proposes Network-Aware Round Robin (NARR), a processor-sharing strategy to reduce delays in traversing SDN/NFV nodes. The application of NARR alleviates the job of the Orchestrator by automatically working at the intranode level, dynamically assigning the processor slices to the virtual network functions (VNFs) according to the state of the queues associated with the output links of the network interface cards (NICs). An extensive simulation set is presented to show the improvements achieved with respect to two more processor-sharing strategies chosen as reference.

1. Introduction

In the last few years, the diffusion of new complex and efficient distributed services in the Internet is becoming increasingly difficult, because of the ossification of the Internet protocols, the proprietary nature of existing hardware appliances, the costs, and the lack of skilled professionals for maintaining and upgrading them.

In order to alleviate these problems, two new network paradigms, Software Defined Networks (SDN) [1–5] and Network Functions Virtualization (NFV) [6, 7], have been recently proposed, with the specific target of improving the flexibility of network service provisioning and reducing the time to market of new services.

SDN is an emerging architecture that aims at making the network dynamic, manageable, and cost-effective, by decoupling the system that makes decisions about where traffic is sent (the control plane) from the underlying system that forwards traffic to the selected destination (the data plane). In this way, the network control becomes directly programmable, and the underlying infrastructure is abstracted for applications and network services.

NFV is a core structural change in the way telecommunication infrastructure is deployed. The NFV initiative started in late 2012 by some of the biggest telecommunications service providers, which formed an Industry Specification Group (ISG) within the European Telecommunications Standards Institute (ETSI). The interest has grown, involving today more than 28 network operators and over 150 technology providers from across the telecommunications industry [7]. The NFV paradigm leverages on virtualization technologies and commercial off-the-shelf programmable hardware, such as general-purpose servers, storage, and switches, with the final target of decoupling the software implementation of network functions from the underlying hardware.

The coexistence and the interaction of both NFV and SDN paradigms is giving to the network operators the possibility of achieving greater agility and acceleration in new service deployments, with a consequent considerable reduction of both Capital Expenditure (CAPEX) and Operational Expenditure (OPEX) [8].

One of the main challenging problems in deploying an SDN/NFV network is an efficient design of resource allocation and management functions that are in charge of the network Orchestrator. Although this task is well covered in data center and cloud scenarios [9, 10], it is currently a challenging problem in a geographic network, where transmission delays cannot be neglected and transmission capacities of the interconnection links are not comparable with the case of the above scenarios. This is the reason why the problem of orchestrating an SDN/NFV network is still open and attracts a lot of research interest from both academia and industry.


More specifically, the whole problem of orchestrating an SDN/NFV network is very complex, because it involves a design work at both internode and intranode levels [11, 12]. At the internode level, and in all the cases where the time between each execution is significantly greater than the time to collect, compute, and disseminate results, the application of a centralized approach is practicable [13, 14]. In these cases, in fact, taking into consideration both traffic characterization and required level of quality of service (QoS), the Orchestrator is able to decide in a centralized manner how many instances of the same function have to be run simultaneously, the network nodes that have to execute them, and the routing strategies that allow traffic flows to cross the nodes where the requested network functions are running.

Instead, centralizing a strategy dealing with operations that require dynamic reconfiguration of network resources is actually unfeasible to be executed by the Orchestrator, for problems of resilience and scalability. Therefore, adaptive management operations with short timescales require a distributed approach. The authors of [15] propose a framework to support adaptive resource management operations, which involve short timescale reconfiguration of network resources, showing how requirements in terms of load-balancing and energy management [16–18] can be satisfied. Another work in the same direction is [19], which proposes a solution for the consolidation of VMs on local computing resources, exclusively based on local information. The work in [20] aims at alleviating the inter-VM network latency, defining a hypervisor scheduler algorithm that is able to take into consideration the allocation of the resources within a consolidated environment, scheduling VMs to reduce their waiting latency in the run queue. Another approach is applied in [21], which introduces a policy to manage the internal on/off switching of virtual network functions (VNFs) in NFV-compliant Customer Premises Equipment (CPE) devices.

The focus of this paper is on another fundamental problem that is inherent to resource allocation within an SDN/NFV node, that is, the decision of the percentage of CPU to be assigned to each VNF. If at a first glance this could appear to be a classical problem of processor sharing, widely explored in the past literature [15, 22, 23], actually it is much more complex, because performance can be strongly improved by leveraging on the correlation with the output queues associated with the network interface cards (NICs).

With all this in mind, the main contribution of this paper is the definition of a processor-sharing policy, in the following referred to as Network-Aware Round Robin (NARR), which is specific for SDN/NFV nodes. Starting from the consideration that packets that have received the service of a network function from a virtual machine (VM) running on a given node are enqueued to wait for transmission through a given NIC, the proposed strategy dynamically changes the slices of the CPU assigned to each VNF according to the state of the output NIC queues. More specifically, the NARR strategy gives a larger CPU slice to serve packets that will leave the node through the NIC that is currently less loaded, in such a way as to minimize wastes of the NIC output link capacities, also minimizing the overall delay experienced by packets traversing nodes that implement NARR.

As a side contribution, the paper calculates an on-off model for the traffic leaving the SDN/NFV node on each NIC output link. This model can be used as a building block for the design and performance evaluation of an entire network.

The paper is structured as follows. Section 2 describes the node architecture. Section 3 introduces the NARR strategy. Section 4 presents a case study and shows some numerical results. Finally, Section 5 draws some conclusions and discusses some future work.

2. System Description

The target of this section is the description of the system we consider in the rest of the paper. It is an SDN/NFV node, as the one considered in [11, 12], where we will apply the NARR strategy. Its architecture, shown in Figure 1(a), is compliant with the ETSI Specifications [24]. It is composed of three different domains, namely, the Compute domain, the Hypervisor domain, and the Infrastructure Network domain. The Compute domain provides the computational and storage hardware resources that allow the node to host the VNFs. Thanks to the computing and storage virtualization provided by the Hypervisor domain, a VM can be created, migrated from one node to another one, and halted, in order to optimize the deployment according to specific performance parameters. Communications among the VMs, and between the VMs and the external environment, are provided by the Infrastructure Network domain.

The SDN/NFV node is remotely controlled by the Orchestrator, whose architecture is shown in Figure 1(b). It is constituted by three main blocks. The Orchestration Engine executes all the algorithms and the strategies to manage and orchestrate the whole network. After each decision, the Orchestration Engine requests that the NFV Coordinator instantiates, migrates, or halts VMs, and consequently requests that the SDN Controller modifies the flow tables of the SDN switches in the network, in such a way that traffic flows can traverse VMs hosting the requested VNFs.

With this in mind, a functional architecture of the NFV node is represented in Figure 2. Its main components are the Processor, which manages the Compute domain and hosts the Hypervisor domain, and the Network Interface Cards (NICs), with their queues, which constitute the "Network Hardware" block in Figure 1.

Let $M$ be the number of virtual network functions (VNFs) that are running in the node, and let $L$ be the number of output NICs. In order to simplify notation, in the following we will assume that all the NICs have the same characteristics in terms of buffer capacity and output rate. So, let $K^{(\mathrm{NIC})}$ be the size of the queue associated with each NIC, that is, the maximum number of packets that each queue can contain, and let $\mu^{(\mathrm{NIC})}$ be the transmission rate of the output link associated with each NIC, expressed in bit/s.

The Flow Distributor block has the task of routing each entering flow towards the function required by the flow. It is a software SDN switch that can be implemented, for example, with OpenvSwitch [25]. It routes the flows to the VMs running the requested VNFs according to the control messages received by the SDN Controller residing in the Orchestrator. The most common protocol that can be used for the communications between the SDN Controller and the Orchestrator is OpenFlow [26].

Figure 1: Network function virtualization infrastructure: (a) NFV node; (b) Orchestrator.

Figure 2: NFV node functional architecture.

Let $\mu^{(P)}$ be the total processing rate of the processor, expressed in packets/s. This rate is shared among all the active functions according to a processor rate scheduling strategy. Let $\boldsymbol{\mu}^{(F)}$ be the array whose generic element $\mu^{(F)}_{[m]}$, with $m \in \{1, \ldots, M\}$, is the portion of the processor rate assigned to the VM implementing the function $F_m$. Of course, we have

$$\sum_{m=1}^{M} \mu^{(F)}_{[m]} = \mu^{(P)}. \tag{1}$$

Once a packet has been served by the required function, it is sent to one of the NICs to exit from the node. If the NIC is transmitting another packet, the arriving packets are enqueued in the NIC queue. We will indicate the queue associated with the generic NIC $l$ as $Q^{(\mathrm{NIC})}_{l}$.

In order to implement the NARR strategy proposed in this paper, we realize the block relative to each function with a set of $L$ parallel queues, in the following referred to as inside-function queues, as shown in Figure 3 for the generic function $F_m$. The generic $l$th inside-function queue of the function $F_m$, indicated as $Q_{m,l}$ in Figure 3, is used to enqueue packets that, after receiving the service of the function $F_m$, will leave the node through the NIC $l$. Let $K^{(F)}_{\mathrm{Ins}}$ be the size of each inside-function queue. Each inside-function queue of the generic function $F_m$ receives a portion of the processor rate assigned to that function. Let $\mu^{(F_{\mathrm{Ins}})}_{[m,l]}$ be the portion of the processor rate assigned to the queue $Q_{m,l}$ of the function $F_m$. Of course, we have

$$\sum_{l=1}^{L} \mu^{(F_{\mathrm{Ins}})}_{[m,l]} = \mu^{(F)}_{[m]}. \tag{2}$$
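To make the queueing structure concrete, the following minimal Python sketch (an illustration of the model only, not part of the simulator in [27]; all names are hypothetical) builds the $M \times L$ inside-function queues and the $L$ NIC queues with the sizes defined above:

```python
from collections import deque

# Hypothetical parameters (Section 4.1 uses M = 4 VNFs and L = 3 NICs).
M, L = 4, 3
K_NIC = 3000      # NIC queue size K^(NIC), in packets
K_F_INS = 3000    # inside-function queue size K^(F)_Ins, in packets

# One queue per output NIC: Q_nic[l] holds packets awaiting link l.
Q_nic = [deque() for _ in range(L)]

# One inside-function queue per (function, NIC) pair: Q_ins[m][l] holds
# packets that, after service by F_m, will leave through NIC l.
Q_ins = [[deque() for _ in range(L)] for _ in range(M)]

def enqueue_after_service(m, l, packet):
    """Enqueue a packet served by F_m and destined to NIC l;
    the packet is dropped if the inside-function queue is full."""
    if len(Q_ins[m][l]) >= K_F_INS:
        return False
    Q_ins[m][l].append(packet)
    return True
```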

Figure 3: Block diagram of the generic function $F_m$.

The portion of processor rate associated with each inside-function queue is dynamically changed by the Processor Arbiter, according to the NARR strategy described in Section 3.

3. NARR Processor-Sharing Strategy

The NARR (Network-Aware Round Robin) processor-sharing strategy observes the state of both the inside-function queues and the NIC queues, with the goal of reducing as far as possible the inactivity periods of the NIC output links. As already introduced in the Introduction, its definition starts from the fact that packets that have received the service of a VNF are enqueued to wait for transmission through a given NIC. So, in order to avoid output link capacity waste, NARR dynamically changes the slices of the CPU assigned to each VNF, and in particular to each inside-function queue, according to the state of the output NIC queues, assigning larger CPU slices to serve packets that will leave the node through less-loaded NICs.

More specifically, the Processor Arbiter decides the processor rate portions according to two different steps.

Step 1 (assignment of the processor rate portion to the aggregation of queues whose output is a specific NIC). This step meets the target of the proposed strategy, that is, to reduce as much as possible underutilization of the NIC output links and, as a consequence, delays in the relative queues. To this purpose, let us consider a virtual queue that contains all the packets that are stored in all the inside-function queues $Q_{m,l}$, for each $m \in [1, M]$, that is, all the packets that will leave the node through the NIC $l$. Let us indicate this virtual queue as $Q^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$ and its service rate as $\mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$. Of course, we have

$$\mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}} = \sum_{m=1}^{M} \mu^{(F_{\mathrm{Ins}})}_{[m,l]}. \tag{3}$$

With this in mind, the idea is to give a higher processor slice to the inside-function queues whose flows are directed to the NICs that are emptying.

Taking into account the goal of privileging the flows that will leave the node through underloaded NICs, the Processor Arbiter calculates $\mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$ as follows:

$$\mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}} = \frac{q_{\mathrm{ref}} - S^{(\mathrm{NIC})}_{l}}{\sum_{j=1}^{L} \left( q_{\mathrm{ref}} - S^{(\mathrm{NIC})}_{j} \right)} \, \mu^{(P)}, \tag{4}$$

where $S^{(\mathrm{NIC})}_{l}$ represents the state of the queue associated with the NIC $l$, while $q_{\mathrm{ref}}$ is defined as follows:

$$q_{\mathrm{ref}} = \min \left\{ \alpha \cdot \max_{j} \left( S^{(\mathrm{NIC})}_{j} \right),\; K^{(\mathrm{NIC})} \right\}. \tag{5}$$

The term $q_{\mathrm{ref}}$ is a reference target value calculated from the state of the NIC queue that has the highest length, amplified with a coefficient $\alpha$ and truncated to the maximum queue size $K^{(\mathrm{NIC})}$. It is determined in such a way that, if we consider $\alpha = 1$, the NIC queue that has the highest length does not receive packets from the inside-function queues, because their service rate is set to zero; the other queues receive packets with a rate that is proportional to the distance between their length and the length of the most overloaded NIC queue. However, through an extensive set of simulations we deduced that setting $\alpha = 1$ causes bad performance, because there is always a group of inside-function queues that are not served. Instead, all the $\alpha$ values in the interval $]1, 2]$ give almost equivalent performance. For this reason, in the numerical analysis presented in Section 4, we have set $\alpha = 1.2$.
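As an illustration of Step 1, the following Python sketch computes the aggregate rates of (4) and (5) from the NIC queue states; the function name and inputs are hypothetical, and the only corner case handled beyond the formulas is the all-empty one:

```python
def step1_aggregate_rates(S_nic, mu_P, K_nic, alpha=1.2):
    """Share mu_P among the L virtual queues feeding the NICs,
    following (4)-(5): emptier NICs get larger slices."""
    # Reference length: longest NIC queue amplified by alpha,
    # truncated to the maximum queue size K_nic (eq. (5)).
    q_ref = min(alpha * max(S_nic), K_nic)
    weights = [q_ref - s for s in S_nic]
    total = sum(weights)
    if total == 0:                      # all NIC queues empty
        return [mu_P / len(S_nic)] * len(S_nic)
    # Slice proportional to the distance from q_ref (eq. (4)).
    return [mu_P * w / total for w in weights]

# Example: NIC 2 is empty, so it receives the largest slice.
print(step1_aggregate_rates(S_nic=[60, 0, 40], mu_P=306e3, K_nic=3000))
```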

Step 2 (assignment of the processor rate portion to each inside-function queue). Let us consider the generic $l$th inside-function queue of the function $F_m$, that is, the queue $Q_{m,l}$. Its service rate is calculated as being proportional to the current state of this queue, in comparison with the other $l$th queues of the other functions. To this purpose, let us indicate the state of the virtual queue $Q^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$ as $S^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$. Of course, it can be calculated as the sum of the states of all the inside-function queues $Q_{m,l}$, for each $m \in [1, M]$, that is,

$$S^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}} = \sum_{k=1}^{M} S^{(F_{\mathrm{Ins}})}_{k,l}. \tag{6}$$

So, the service rate of the inside-function queue $Q_{m,l}$ is determined as a fraction of the service rate assigned at the first step to the virtual queue $Q^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$, $\mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$, as follows:

$$\mu^{(F_{\mathrm{Ins}})}_{[m,l]} = \frac{S^{(F_{\mathrm{Ins}})}_{m,l}}{S^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}} \, \mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}. \tag{7}$$

Of course, if at any time an inside-function queue remains empty, the processor rate portion assigned to it will be shared among the other queues, proportionally to the processor portions previously assigned. Likewise, if at some instant an empty queue receives a new packet, the previous processor rate portion is reassigned to that queue.
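Continuing the illustrative sketch above, Step 2 can be written as follows; again the function name is hypothetical. Note that splitting the aggregate rate proportionally to the queue states, as in (7), automatically gives an empty inside-function queue a zero slice, so its share is redistributed among the nonempty ones:

```python
def step2_inside_rates(S_ins, mu_aggr):
    """Split each aggregate rate mu_aggr[l] among the M inside-function
    queues feeding NIC l, proportionally to their states (eqs. (6)-(7)).
    S_ins[m][l] is the state of queue Q_{m,l}."""
    M, L = len(S_ins), len(S_ins[0])
    mu_ins = [[0.0] * L for _ in range(M)]
    for l in range(L):
        S_aggr = sum(S_ins[m][l] for m in range(M))        # eq. (6)
        if S_aggr == 0:
            continue                                        # nothing to serve
        for m in range(M):
            mu_ins[m][l] = mu_aggr[l] * S_ins[m][l] / S_aggr  # eq. (7)
    return mu_ins
```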


4. Case Study

In this section we present a numerical analysis of the behavior of an SDN/NFV node that applies the proposed NARR processor-sharing strategy, with the target of evaluating the achieved performance. To this purpose, we will consider two other processor-sharing strategies as reference, in the following referred to as round robin (RR) and queue-length weighted round robin (QLWRR). In both the reference cases the node has the same $L$ NIC queues, but it has only $M$ processor queues, one for each function. Each of the $M$ queues has a size of $K^{(F)} = L \cdot K^{(F)}_{\mathrm{Ins}}$, where $K^{(F)}_{\mathrm{Ins}}$ represents the size of each internal function queue, already defined so far for the proposed strategy.

The RR strategy applies the classical round robin scheduling policy to serve the $M$ function queues, that is, it serves each function queue with a rate $\mu^{(F)}_{\mathrm{RR}} = \mu^{(P)}/M$.

The QLWRR strategy, on the other hand, serves each function queue with a rate that is proportional to the queue length, that is,

$$\mu^{(F_m)}_{\mathrm{QLWRR}} = \frac{S^{(F_m)}}{\sum_{k=1}^{M} S^{(F_k)}} \, \mu^{(P)}, \tag{8}$$

where $S^{(F_m)}$ is the state of the queue associated with the function $F_m$.
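For comparison, a minimal sketch of the two reference schedulers follows (hypothetical helper names; the degenerate all-empty case falls back to RR):

```python
def rr_rates(M, mu_P):
    """Classical round robin: equal slice mu_P / M to each function queue."""
    return [mu_P / M] * M

def qlwrr_rates(S_F, mu_P):
    """Queue-length weighted round robin (eq. (8)): slice proportional
    to the current length S_F[m] of each function queue."""
    total = sum(S_F)
    if total == 0:
        return rr_rates(len(S_F), mu_P)
    return [mu_P * s / total for s in S_F]
```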

4.1. Parameter Settings. In this numerical analysis we consider a node with $M = 4$ VNFs and $L = 3$ output NICs. We loaded the node with a balanced traffic constituted by $N_F = 12$ flows, each characterized by a different 2-tuple (requested VNF, output NIC). Each flow has been generated with an on-off model characterized by exponentially distributed on and off periods. When a flow is in the off state, no packets arrive to the node from it; instead, when it is in the on state, packets arrive with an average rate $\lambda_{\mathrm{ON}}$. Each packet is assumed with an exponentially distributed size, with a mean of 1 kbyte. In all the simulations we have considered the same on-off cycle duration $\delta = T_{\mathrm{OFF}} + T_{\mathrm{ON}} = 5$ msec, while we have varied the burstiness. Burstiness of on-off sources is defined as

$$b = \frac{\lambda_{\mathrm{ON}}}{\lambda_{\mathrm{Mean}}}, \tag{9}$$

where $\lambda_{\mathrm{Mean}}$ is the mean emission rate. Now, indicating the probability of the on state as $\pi_{\mathrm{ON}}$, and taking into account that $\lambda_{\mathrm{Mean}} = \lambda_{\mathrm{ON}} \cdot \pi_{\mathrm{ON}}$ and $\pi_{\mathrm{ON}} = T_{\mathrm{ON}}/(T_{\mathrm{OFF}} + T_{\mathrm{ON}})$, we have

$$b = \frac{T_{\mathrm{OFF}} + T_{\mathrm{ON}}}{T_{\mathrm{ON}}}. \tag{10}$$

In our analysis the burstiness has been varied in the range $[2, 22]$. Consequently, we have derived the mean durations of the off and on periods as follows:

$$T_{\mathrm{ON}} = \frac{\delta}{b}, \qquad T_{\mathrm{OFF}} = \delta - T_{\mathrm{ON}}. \tag{11}$$

Finally, in order to maintain the same mean packet rate for different values of $T_{\mathrm{OFF}}$ and $T_{\mathrm{ON}}$, we have assumed that 75 packets are transmitted on average in each on period, that is, $\lambda_{\mathrm{ON}} = 75/T_{\mathrm{ON}}$. The resulting mean emission rate is $\lambda_{\mathrm{Mean}} = 122.9$ Mbit/s.

As far as the NICs are concerned, we have considered an output rate of $\mu^{(\mathrm{NIC})} = 980$ Mbit/s, so having a utilization coefficient on each NIC of $\rho^{(\mathrm{NIC})} = 0.5$, and a queue size $K^{(\mathrm{NIC})} = 3000$ packets. Instead, regarding the processor, we considered a size of each inside-function queue of $K^{(F)}_{\mathrm{Ins}} = 3000$ packets. Finally, we have analyzed two different processor cases. In the first case we considered a processor that is able to process $\mu^{(P)} = 306$ kpackets/s, while in the second case we assumed a processor rate of $\mu^{(P)} = 204$ kpackets/s. Therefore, defining the processor utilization coefficient as

$$\rho^{(P)} = \frac{N_F \cdot \lambda_{\mathrm{Mean}}}{\mu^{(P)}}, \tag{12}$$

with $N_F \cdot \lambda_{\mathrm{Mean}}$ being the total mean arrival rate to the node, the two considered cases are characterized by a processor utilization coefficient of $\rho^{(P)}_{\mathrm{Low}} = 0.6$ and $\rho^{(P)}_{\mathrm{High}} = 0.9$, respectively.
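The derivation in (9)–(12) is easy to verify numerically; the following sketch (hypothetical names; 1 kbyte = 1024 bytes assumed) reproduces the stated mean rate and the two processor utilization coefficients:

```python
PKT_BITS = 1024 * 8          # mean packet size: 1 kbyte
DELTA = 5e-3                 # on-off cycle duration delta (s)
N_F = 12                     # number of flows

def onoff_params(b):
    """Mean on/off durations and peak rate for burstiness b (eqs. (10)-(11))."""
    T_on = DELTA / b
    T_off = DELTA - T_on
    lam_on = 75 / T_on       # 75 packets per on period on average
    return T_on, T_off, lam_on

lam_mean = 75 / DELTA                 # 15 kpackets/s per flow
print(lam_mean * PKT_BITS / 1e6)      # ~122.9 Mbit/s
for mu_P in (306e3, 204e3):           # the two processor cases
    print(N_F * lam_mean / mu_P)      # ~0.59 and ~0.88, rounded in the
                                      # paper to 0.6 and 0.9 (eq. (12))
```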

4.2. Numerical Results. In this section we present some results achieved by discrete-event simulations. The simulation tool used in the paper is publicly available in [27]. We first present a temporal analysis of the main variables characterizing the SDN/NFV node, and then we show a performance comparison of NARR with the two strategies, RR and QLWRR, taken as a reference.

For the temporal analysis we focus on a short time interval of 720 $\mu$sec, in order to be able to clearly highlight the evolution of the considered processes. We have loaded the node with an on-off traffic like the one described in Section 4.1, with a burstiness $b = 7$.

Figures 4, 5, and 6 show the time evolution of the length of the queues associated with the NICs, the processor slice assigned to each virtual queue loading each NIC, and the length of the same virtual queues. We can subdivide the considered time interval into three different periods.

In the first period, ranging between the instants 0.1063 and 0.1067, from Figure 4 we can notice that the NIC queue $Q^{(\mathrm{NIC})}_{1}$ has a greater length than the queue $Q^{(\mathrm{NIC})}_{3}$, while $Q^{(\mathrm{NIC})}_{2}$ is empty. For this reason, in this period, as shown in Figure 5, the processor is shared between $Q^{(\rightarrow \mathrm{NIC}_1)}_{\mathrm{Aggr}}$ and $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$, and the slice assigned to serve $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ is higher, in such a way that the two queue lengths $Q^{(\mathrm{NIC})}_{1}$ and $Q^{(\mathrm{NIC})}_{3}$ reach the same value, a situation that happens at the end of the first period, around the instant 0.1067. During this period the behavior of both the virtual queues $Q^{(\rightarrow \mathrm{NIC}_1)}_{\mathrm{Aggr}}$ and $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ in Figure 6 remains flat, showing the fact that the received processor rates are able to balance the arrival rates.

At the beginning of the second period, which ranges between the instants 0.1067 and 0.10693, the processor rate assigned to the virtual queue $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ has become not sufficient to serve the amount of arriving traffic, and so the virtual queue $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ increases its length, as shown in Figure 6.

Figure 4: Length evolution of the NIC queues.

Figure 5: Packet rate assigned to the virtual queues.

Figure 6: Length evolution of the virtual queues.

During this second period the processor slices assigned to the two queues $Q^{(\rightarrow \mathrm{NIC}_1)}_{\mathrm{Aggr}}$ and $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ are adjusted in such a way that $Q^{(\mathrm{NIC})}_{1}$ and $Q^{(\mathrm{NIC})}_{3}$ remain with comparable lengths.

The last period starts at the instant 0.10693, characterized by the fact that the aggregated queue $Q^{(\rightarrow \mathrm{NIC}_2)}_{\mathrm{Aggr}}$ leaves the empty state and therefore participates in the processor-sharing process. Since the NIC queue $Q^{(\mathrm{NIC})}_{2}$ is low-loaded, as shown in Figure 4, the largest slice is assigned to $Q^{(\rightarrow \mathrm{NIC}_2)}_{\mathrm{Aggr}}$, in such a way that $Q^{(\mathrm{NIC})}_{2}$ can reach the same length of the other two NIC queues as soon as possible.

Now, in order to show how the second step of the proposed strategy works, we present the behavior of the inside-function queues whose output is sent to the NIC queue $Q^{(\mathrm{NIC})}_{3}$ during the same short time interval considered so far. More specifically, Figures 7 and 8 show the length of the considered inside-function queues and the processor slices assigned to them, respectively. The behavior of the queue $Q_{2,3}$ is not shown, because it is empty in the considered period. As we can observe from the above figures, we can subdivide the interval into four periods:

(i) The first period, ranging in the interval [0.1063, 0.10657], is characterized by an empty state of the queue $Q_{3,3}$. Thus, in this period, the processor slice assigned to the aggregated queue $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ is shared by $Q_{1,3}$ and $Q_{4,3}$ only.

(ii) During the second period, ranging in the interval [0.10657, 0.1067], $Q_{1,3}$ is scarcely loaded (in particular, it is empty in the second part of this period), and so the processor slice assigned to $Q_{3,3}$ is increased.

(iii) In the third period, ranging between 0.1067 and 0.10693, all the queues increase and equally share the processor.

(iv) Finally, in the last period, starting at the instant 0.10693, as already observed in Figure 5, the processor slice assigned to the aggregated queue $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ is suddenly decreased, and consequently the slices assigned to the queues $Q_{1,3}$, $Q_{3,3}$, and $Q_{4,3}$ are decreased as well.

The steady-state analysis is presented in Figures 9, 10, and 11, which show the mean delay in the inside-function queues, the mean delay in the NIC queues, and the durations of the off and on states on the output links. The values reported in all the figures have been derived as the mean values of the results of many simulation experiments, using Student's t-distribution and with a 95% confidence interval. The number of experiments carried out to evaluate each numerical result has been automatically decided by the simulation tool, with the requirement of achieving a confidence interval less than 0.001 of the estimated mean value. To this purpose, the confidence interval is calculated at the end of each run, and simulation is stopped only when the confidence interval matches the maximum error requirement. The duration of each run has been chosen in such a way that the sample standard deviation is so low that less than 30 runs are enough to match the requirement.
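A stopping rule of this kind can be sketched as follows (an illustration only, not the actual simulator code; run_once is a hypothetical stand-in for one independent simulation run):

```python
import math
from statistics import mean, stdev
from scipy.stats import t

def estimate(run_once, rel_err=0.001, conf=0.95, min_runs=3, max_runs=1000):
    """Repeat independent simulation runs until the Student's t
    confidence half-interval drops below rel_err * |mean|."""
    samples, half = [], float("inf")
    while len(samples) < max_runs:
        samples.append(run_once())
        n = len(samples)
        if n < min_runs:
            continue
        m = mean(samples)
        half = t.ppf((1 + conf) / 2, n - 1) * stdev(samples) / math.sqrt(n)
        if half <= rel_err * abs(m):    # maximum error requirement met
            break
    return mean(samples), half, len(samples)
```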

Figure 7: Length evolution of the inside-function queues $Q_{m,3}$, for each $m \in [1, M]$. (a) Queue $Q_{1,3}$; (b) queue $Q_{3,3}$; (c) queue $Q_{4,3}$.

The figures compare the results achieved with the proposed strategy with the ones obtained with the two strategies RR and QLWRR. As said so far, results have been obtained against the burstiness and for two different values of the utilization coefficient, that is, $\rho^{(P)}_{\mathrm{Low}} = 0.6$ and $\rho^{(P)}_{\mathrm{High}} = 0.9$.

As expected, the mean lengths of the processor queues and the NIC queues increase with both the burstiness and the utilization coefficient.

Instead, we can note that the mean length of the processor queues is not affected by the applied policy. In fact, packets requiring the same function are enqueued in a unique queue and served with a rate $\mu^{(P)}/4$ when the RR or QLWRR strategies are applied, while, when the NARR strategy is used, they are split into 12 different queues and served with a rate $\mu^{(P)}/12$. If in classical queueing theory [28] the second case is worse than the first one, because of the presence of service rate wastes during the more probable periods of empty queues, this is not the case here, because the processor capacity of an empty queue is dynamically reassigned to the other queues.

The advantage of the NARR strategy is evident in Figure 10, where the mean delay in the NIC queues is represented. In fact, we can observe that, by only giving more processor rate to the most loaded processor queues (with the QLWRR strategy), performance improvements are negligible, while, by applying the NARR strategy, we are able to obtain a delay reduction of about 12% in the case of a more performant processor ($\rho^{(P)}_{\mathrm{Low}} = 0.6$), reaching 50% when the processor works with $\rho^{(P)}_{\mathrm{High}} = 0.9$. The performance gain achieved with the NARR strategy increases with burstiness and the processor load conditions, which are both likely. In fact, the first condition is due to the high burstiness of the Internet traffic; the second one is true as well, because the processor should not be overdimensioned for economic purposes; otherwise, if overdimensioned, it can be controlled with a processor rate management policy like the one presented in [29], in order to save energy.

Finally, Figure 11 shows the mean durations of the on and off periods on the node output links. Only one curve is shown for each case, because we have considered a utilization coefficient $\rho^{(\mathrm{NIC})} = 0.5$ on each NIC queue, and therefore in the considered case we have $T_{\mathrm{ON}} = T_{\mathrm{OFF}}$.

Figure 8: Processor rate assigned to the inside-function queues $Q_{m,3}$, for each $m \in [1, M]$. (a) Processor rate $\mu^{(F_{\mathrm{Ins}})}_{[1,3]}$; (b) processor rate $\mu^{(F_{\mathrm{Ins}})}_{[3,3]}$; (c) processor rate $\mu^{(F_{\mathrm{Ins}})}_{[4,3]}$.

As expected, the mean on and off durations are higher when the processor rate is higher (i.e., with the lower processor utilization coefficient). This is because in this case the processor empties its queues more quickly, and batches of packets reach the NIC queues and are transmitted in longer uninterrupted periods. These results can be used to model the output traffic of each SDN/NFV node as an Interrupted Poisson Process (IPP) [30]. This model can be iteratively used to represent the input traffic of other nodes, with the final target of realizing the model of a whole SDN/NFV network.
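As a sketch of how the measured on/off durations could feed such a model, the following hypothetical snippet generates packet arrival times from an IPP with exponentially distributed on/off periods and Poisson arrivals at rate lam_on while on:

```python
import random

def ipp_arrivals(T_on, T_off, lam_on, horizon):
    """Generate arrival times of an Interrupted Poisson Process:
    exponential on/off sojourns, Poisson arrivals (rate lam_on) while on."""
    t, arrivals = 0.0, []
    while t < horizon:
        t_end = t + random.expovariate(1 / T_on)   # on period
        while True:
            t += random.expovariate(lam_on)
            if t >= t_end or t >= horizon:
                break
            arrivals.append(t)
        t = t_end + random.expovariate(1 / T_off)  # off period
    return arrivals
```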

5. Conclusions and Future Work

This paper addresses the problem of intranode resource allocation by introducing NARR, a processor-sharing strategy that leverages the consideration that, in any SDN/NFV node, packets that have received the service of a VNF are enqueued to wait for transmission through one of the output NICs. Therefore, the idea at the base of NARR is to dynamically change the slices of the CPU assigned to each VNF according to the state of the output NIC queues, giving more CPU to serve packets that will leave the node through the less-loaded NICs. In this way, wastes of the NIC output link capacities are minimized, and consequently the overall delay experienced by packets traversing the nodes that implement NARR is reduced.

SDN/NFV nodes that implement NARR can coexist in the same network with nodes that use other strategies, so facilitating a gradual introduction in the Future Internet.

As a side contribution, the simulator tool, which is publicly available on the web, gives the on-off model of the output links associated with each NIC as one of the results. This model can be used as a building block to realize a model for the design and performance evaluation of a whole SDN/NFV network.

Figure 9: Mean delay in the function internal queues.

Figure 10: Mean delay in the NIC queues.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work has been partially supported by the INPUT (In-Network Programmability for Next-Generation Personal cloUd Service Support) project, funded by the European Commission under the Horizon 2020 Programme (Call H2020-ICT-2014-1, Grant no. 644672).

Figure 11: Mean off- and on-state durations of the traffic on the output links.

References

[1] White paper on "Software-Defined Networking: The New Norm for Networks," https://www.opennetworking.org.

[2] M. Yu, L. Jose, and R. Miao, "Software defined traffic measurement with OpenSketch," in Proceedings of the Symposium on Network Systems Design and Implementation (NSDI '13), vol. 13, pp. 29–42, Lombard, Ill, USA, April 2013.

[3] H. Kim and N. Feamster, "Improving network management with software defined networking," IEEE Communications Magazine, vol. 51, no. 2, pp. 114–119, 2013.

[4] S. H. Yeganeh, A. Tootoonchian, and Y. Ganjali, "On scalability of software-defined networking," IEEE Communications Magazine, vol. 51, no. 2, pp. 136–141, 2013.

[5] B. A. A. Nunes, M. Mendonca, X.-N. Nguyen, K. Obraczka, and T. Turletti, "A survey of software-defined networking: past, present, and future of programmable networks," IEEE Communications Surveys and Tutorials, vol. 16, no. 3, pp. 1617–1634, 2014.

[6] White paper on "Network Functions Virtualisation," http://portal.etsi.org/NFV/NFV_White_Paper.pdf.

[7] Network Functions Virtualisation (NFV): Network Operator Perspectives on Industry Progress, ETSI, October 2013, http://portal.etsi.org/NFV/NFV_White_Paper2.pdf.

[8] A. Manzalini, R. Saracco, E. Zerbini et al., "Software-Defined Networks for Future Networks and Services," White Paper based on the IEEE Workshop SDN4FNS, https://sites.ieee.org/sdn4fns/whitepaper.

[9] R. Z. Frantz, R. Corchuelo, and J. L. Arjona, "An efficient orchestration engine for the cloud," in Proceedings of the IEEE 3rd International Conference on Cloud Computing Technology and Science (CloudCom '11), vol. 2, pp. 711–716, Athens, Greece, December 2011.

[10] K. Bousselmi, Z. Brahmi, and M. M. Gammoudi, "Cloud services orchestration: a comparative study of existing approaches," in Proceedings of the 28th IEEE International Conference on Advanced Information Networking and Applications Workshops (WAINA '14), pp. 410–416, Victoria, Canada, May 2014.

[11] A. Lombardo, A. Manzalini, V. Riccobene, and G. Schembra, "An analytical tool for performance evaluation of software defined networking services," in Proceedings of the IEEE Network Operations and Management Symposium (NOMS '14), pp. 1–7, Krakow, Poland, May 2014.

[12] A. Lombardo, A. Manzalini, G. Schembra, G. Faraci, C. Rametta, and V. Riccobene, "An open framework to enable NetFATE (network functions at the edge)," in Proceedings of the IEEE Workshop on Management Issues in SDN, SDI and NFV (Mission '15), pp. 1–6, London, UK, April 2015.

[13] A. Manzalini, R. Minerva, F. Callegati, W. Cerroni, and A. Campi, "Clouds of virtual machines in edge networks," IEEE Communications Magazine, vol. 51, no. 7, pp. 63–70, 2013.

[14] H. Moens and F. De Turck, "VNF-P: a model for efficient placement of virtualized network functions," in Proceedings of the 10th International Conference on Network and Service Management (CNSM '14), pp. 418–423, Rio de Janeiro, Brazil, November 2014.

[15] K. Katsalis, G. S. Paschos, Y. Viniotis, and L. Tassiulas, "CPU provisioning algorithms for service differentiation in cloud-based environments," IEEE Transactions on Network and Service Management, vol. 12, no. 1, pp. 61–74, 2015.

[16] D. Tuncer, M. Charalambides, G. Pavlou, and N. Wang, "Towards decentralized and adaptive network resource management," in Proceedings of the 7th International Conference on Network and Service Management (CNSM '11), pp. 1–6, Paris, France, October 2011.

[17] D. Tuncer, M. Charalambides, G. Pavlou, and N. Wang, "DACoRM: a coordinated, decentralized and adaptive network resource management scheme," in Proceedings of the IEEE Network Operations and Management Symposium (NOMS '12), pp. 417–425, IEEE, Maui, Hawaii, USA, April 2012.

[18] M. Charalambides, D. Tuncer, L. Mamatas, and G. Pavlou, "Energy-aware adaptive network resource management," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 369–377, Ghent, Belgium, May 2013.

[19] C. Mastroianni, M. Meo, and G. Papuzzo, "Probabilistic consolidation of virtual machines in self-organizing cloud data centers," IEEE Transactions on Cloud Computing, vol. 1, no. 2, pp. 215–228, 2013.

[20] B. Guan, J. Wu, Y. Wang, and S. U. Khan, "CIVSched: a communication-aware inter-VM scheduling technique for decreased network latency between co-located VMs," IEEE Transactions on Cloud Computing, vol. 2, no. 3, pp. 320–332, 2014.

[21] G. Faraci and G. Schembra, "An analytical model to design and manage a green SDN/NFV CPE node," IEEE Transactions on Network and Service Management, vol. 12, no. 3, pp. 435–450, 2015.

[22] L. Kleinrock, "Time-shared systems: a theoretical treatment," Journal of the ACM, vol. 14, no. 2, pp. 242–261, 1967.

[23] S. F. Yashkov and A. S. Yashkova, "Processor sharing: a survey of the mathematical theory," Automation and Remote Control, vol. 68, no. 9, pp. 1662–1731, 2007.

[24] ETSI GS NFV-INF 001 v1.1.1, "Network Functions Virtualization (NFV): Infrastructure Overview," 2015, https://www.etsi.org/deliver/etsi_gs/NFV-INF/001_099/001/01.01.01_60/gs_NFV-INF001v010101p.pdf.

[25] T. N. Subedi, K. K. Nguyen, and M. Cheriet, "OpenFlow-based in-network Layer-2 adaptive multipath aggregation in data centers," Computer Communications, vol. 61, pp. 58–69, 2015.

[26] N. McKeown, T. Anderson, H. Balakrishnan et al., "OpenFlow: enabling innovation in campus networks," ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, pp. 69–74, 2008.

[27] NARR simulator v0.1, http://www.diit.unict.it/arti/Tools/NARR_Simulator_v01.zip.

[28] L. Kleinrock, Queueing Systems. Volume 1: Theory, Wiley-Interscience, 1st edition, 1975.

[29] R. Bruschi, P. Lago, A. Lombardo, and G. Schembra, "Modeling power management in networked devices," Computer Communications, vol. 50, pp. 95–109, 2014.

[30] W. Fischer and K. S. Meier-Hellstern, "The Markov-Modulated Poisson process (MMPP) cookbook," Performance Evaluation, vol. 18, no. 2, pp. 149–171, 1993.


Round Robin (NARR). The proposed strategy dynamically changes the slices of the CPU assigned to each VNF according to the state of the output network interface card queues. In order not to waste output link bandwidth, more processing resources are assigned to the VNFs whose packets are addressed towards the least loaded output NICs.

"Virtual Networking Performance in OpenStack Platform for Network Function Virtualization" by F. Callegati et al. evaluates the performance of an open-source Virtual Infrastructure Manager (VIM), OpenStack, focusing in particular on packet forwarding performance issues. A set of experiments is presented that refer to a number of scenarios inspired by the cloud computing and NFV paradigms, considering both single- and multitenant scenarios.

Vincenzo Eramo
Xavier Hesselbach-Serra
Yan Luo
Juan Felipe Botero

Research Article
Virtual Networking Performance in OpenStack Platform for Network Function Virtualization

Franco Callegati, Walter Cerroni, and Chiara Contoli

DEI, University of Bologna, Via Venezia 52, 47521 Cesena, Italy

Correspondence should be addressed to Walter Cerroni; walter.cerroni@unibo.it

Received 19 October 2015; Revised 19 January 2016; Accepted 30 March 2016

Academic Editor: Yan Luo

Copyright © 2016 Franco Callegati et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The emerging Network Function Virtualization (NFV) paradigm, coupled with the highly flexible and programmatic control of network devices offered by Software Defined Networking solutions, enables unprecedented levels of network virtualization that will definitely change the shape of future network architectures, where legacy telco central offices will be replaced by cloud data centers located at the edge. On the one hand, this software-centric evolution of telecommunications will allow network operators to take advantage of the increased flexibility and reduced deployment costs typical of cloud computing. On the other hand, it will pose a number of challenges in terms of virtual network performance and customer isolation. This paper intends to provide some insights on how an open-source cloud computing platform such as OpenStack implements multitenant network virtualization and how it can be used to deploy NFV, focusing in particular on packet forwarding performance issues. To this purpose, a set of experiments is presented that refer to a number of scenarios inspired by the cloud computing and NFV paradigms, considering both single tenant and multitenant scenarios. From the results of the evaluation it is possible to highlight potentials and limitations of running NFV on OpenStack.

1. Introduction

Despite the original vision of the Internet as a set of networks interconnected by distributed layer 3 routing nodes, nowadays IP datagrams are not simply forwarded to their final destination based on IP header and next-hop information. A number of so-called middle-boxes process IP traffic, performing cross-layer tasks such as address translation, packet inspection and filtering, QoS management, and load balancing. They represent a significant fraction of network operators' capital and operational expenses. Moreover, they are closed systems, and the deployment of new communication services is strongly dependent on the product capabilities, causing the so-called "vendor lock-in" and Internet "ossification" phenomena [1]. A possible solution to this problem is the adoption of virtualized middle-boxes based on open software and hardware solutions. Network virtualization brings great advantages in terms of flexible network management, performed at the software level, and possible coexistence of multiple customers sharing the same physical infrastructure (i.e., multitenancy). Network virtualization

solutions are already widely deployed at different protocol layers, including Virtual Local Area Networks (VLANs), multilayer Virtual Private Network (VPN) tunnels over public wide-area interconnections, and Overlay Networks [2].

Today the combination of emerging technologies such as Network Function Virtualization (NFV) and Software Defined Networking (SDN) promises to bring innovation one step further. SDN provides a more flexible and programmatic control of network devices and fosters new forms of virtualization that will definitely change the shape of future network architectures [3], while NFV defines standards to deploy software-based building blocks implementing highly flexible network service chains capable of adapting to the rapidly changing user requirements [4].

As a consequence, it is possible to imagine a medium-term evolution of the network architectures where middle-boxes will turn into virtual machines (VMs) implementing network functions within cloud computing infrastructures, and telco central offices will be replaced by data centers located at the edge of the network [5–7]. Network operators will take advantage of the increased flexibility and reduced


deployment costs typical of the cloud-based approach, paving the way to the upcoming software-centric evolution of telecommunications [8]. However, a number of challenges must be dealt with, in terms of system integration, data center management, and packet processing performance. For instance, if VLANs are used in the physical switches and in the virtual LANs within the cloud infrastructure, a suitable integration is necessary, and the coexistence of different IP virtual networks dedicated to multiple tenants must be seamlessly guaranteed with proper isolation.

Then a few questions are naturally raised: Will cloud computing platforms be actually capable of satisfying the requirements of complex communication environments such as the operators' edge networks? Will data centers be able to effectively replace the existing telco infrastructures at the edge? Will virtualized networks provide performance comparable to those achieved with current physical networks, or will they pose significant limitations? Indeed, the answer to this question will be a function of the cloud management platform considered. In this work the focus is on OpenStack, which is among the state-of-the-art Linux-based virtualization and cloud management tools. Developed by the open-source software community, OpenStack implements the Infrastructure-as-a-Service (IaaS) paradigm in a multitenant context [9].

To the best of our knowledge, not much work has been reported about the actual performance limits of network virtualization in OpenStack cloud infrastructures under the NFV scenario. Some authors assessed the performance of Linux-based virtual switching [10, 11], while others investigated network performance in public cloud services [12]. Solutions for low-latency SDN implementation on high-performance cloud platforms have also been developed [13]. However, none of the above works specifically deals with NFV scenarios on an OpenStack platform. Although some mechanisms for effectively placing virtual network functions within an OpenStack cloud have been presented [14], a detailed analysis of their network performance has not been provided yet.

This paper aims at providing insights on how the OpenStack platform implements multitenant network virtualization, focusing in particular on the performance issues, trying to fill a gap that is starting to get the attention also of the OpenStack developer community [15]. The paper objective is to identify performance bottlenecks in the cloud implementation of the NFV paradigms. An ad hoc set of experiments were designed to evaluate the OpenStack performance under critical load conditions, in both single tenant and multitenant scenarios. The results reported in this work extend the preliminary assessment published in [16, 17].

The paper is structured as follows: the network virtualization concept in cloud computing infrastructures is further elaborated in Section 2; the OpenStack virtual network architecture is illustrated in Section 3; the experimental test-bed that we have deployed to assess its performance is presented in Section 4; the results obtained under different scenarios are discussed in Section 5; some conclusions are finally drawn in Section 6.

2. Cloud Network Virtualization

Generally speaking, network virtualization is not a new concept. Virtual LANs, Virtual Private Networks, and Overlay Networks are examples of virtualization techniques already widely used in networking, mostly to achieve isolation of traffic flows and/or of whole network sections, either for security or for functional purposes such as traffic engineering and performance optimization [2].

Upon considering cloud computing infrastructures, the concept of network virtualization evolves even further. It is not just that some functionalities can be configured in physical devices to obtain some additional functionality in virtual form. In cloud infrastructures, whole parts of the network are virtual, implemented with software devices and/or functions running within the servers. This new "softwarized" network implementation scenario allows novel network control and management paradigms. In particular, the synergies between NFV and SDN offer programmatic capabilities that allow easily defining and flexibly managing multiple virtual network slices at levels not achievable before [1].

In cloud networking, the typical scenario is a set of VMs dedicated to a given tenant, able to communicate with each other as if connected to the same Local Area Network (LAN), independently of the physical server/servers they are running on. The VMs and LANs of different tenants have to be isolated and should communicate with the outside world only through layer 3 routing and filtering devices. From such requirements stem two major issues to be addressed in cloud networking: (i) integration of any set of virtual networks defined in the data center physical switches with the specific virtual network technologies adopted by the hosting servers and (ii) isolation among virtual networks that must be logically separated because of being dedicated to different purposes or different customers. Moreover, these problems should be solved with performance optimization in mind, for instance, aiming at keeping VMs with intensive exchange of data colocated in the same server, keeping local traffic inside the host and thus reducing the need for external network resources, and minimizing the communication latency.

The solution to these issues is usually fully supported by the VM manager (i.e., the Hypervisor) running on the hosting servers. Layer 3 routing functions can be executed by taking advantage of lightweight virtualization tools, such as Linux containers or network namespaces, resulting in isolated virtual networks with dedicated network stacks (e.g., IP routing tables and netfilter flow states) [18]. Similarly, layer 2 switching is typically implemented by means of kernel-level virtual bridges/switches interconnecting a VM's virtual interface to a host's physical interface. Moreover, the VM placing algorithms may be designed to take networking issues into account, thus optimizing the networking in the cloud together with computation effectiveness [19]. Finally, it is worth mentioning that, whatever network virtualization technology is adopted within a data center, it should be compatible with SDN-based implementation of the control plane (e.g., OpenFlow) for improved manageability and programmability [20].


Figure 1: Main components of an OpenStack cloud setup.

For the purposes of this work, the implementation of layer 2 connectivity in the cloud environment is of particular relevance. Many Hypervisors running on Linux systems implement the LANs inside the servers using Linux Bridge, the native kernel bridging module [21]. This solution is straightforward and is natively integrated with the powerful Linux packet filtering and traffic conditioning kernel functions. The overall performance of this solution should be at a reasonable level when the system is not overloaded [22]. The Linux Bridge basically works as a transparent bridge with MAC learning, providing the same functionality as a standard Ethernet switch in terms of packet forwarding. But such standard behavior is not compatible with SDN and is not flexible enough when aspects such as multitenant traffic isolation, transparent VM mobility, and fine-grained forwarding programmability are critical. The Linux-based bridging alternative is Open vSwitch (OVS), a software switching facility specifically designed for virtualized environments and capable of reaching kernel-level performance [23]. OVS is also OpenFlow-enabled and therefore fully compatible and integrated with SDN solutions.

3. OpenStack Virtual Network Infrastructure

OpenStack provides cloud managers with a web-based dashboard, as well as a powerful and flexible Application Programmable Interface (API), to control a set of physical hosting servers executing different kinds of Hypervisors (in general, OpenStack is designed to manage a number of computers hosting application servers; these application servers can be executed by fully fledged VMs, lightweight containers, or bare-metal hosts; in this work we focus on the most challenging case of application servers running on VMs) and to manage the required storage facilities and virtual network infrastructures. The OpenStack dashboard also allows instantiating computing and networking resources within the data

center infrastructure with a high level of transparency. As illustrated in Figure 1, a typical OpenStack cloud is composed of a number of physical nodes and networks:

(i) Controller node: manages the cloud platform.
(ii) Network node: hosts the networking services for the various tenants of the cloud and provides external connectivity.
(iii) Compute nodes: as many hosts as needed in the cluster to execute the VMs.
(iv) Storage nodes: store data and VM images.
(v) Management network: the physical networking infrastructure used by the controller node to manage the OpenStack cloud services running on the other nodes.
(vi) Instance/tunnel network (or data network): the physical network infrastructure connecting the network node and the compute nodes, to deploy virtual tenant networks and allow inter-VM traffic exchange and VM connectivity to the cloud networking services running in the network node.
(vii) External network: the physical infrastructure enabling connectivity outside the data center.

OpenStack has a component specifically dedicated to network service management: this component, formerly known as Quantum, was renamed as Neutron in the Havana release. Neutron decouples the network abstractions from the actual implementation and provides administrators and users with a flexible interface for virtual network management. The Neutron server is centralized and typically runs in the controller node. It stores all network-related information and implements the virtual network infrastructure in a distributed and coordinated way. This allows Neutron to transparently manage multitenant networks across multiple


compute nodes and to provide transparent VM mobility within the data center.

Neutron's main network abstractions are:

(i) network: a virtual layer 2 segment;
(ii) subnet: a layer 3 IP address space used in a network;
(iii) port: an attachment point to a network and to one or more subnets on that network;
(iv) router: a virtual appliance that performs routing between subnets and address translation;
(v) DHCP server: a virtual appliance in charge of IP address distribution;
(vi) security group: a set of filtering rules implementing a cloud-level firewall.

A cloud customer wishing to implement a virtual infrastructure in the cloud is considered an OpenStack tenant and can use the OpenStack dashboard to instantiate computing and networking resources, typically creating a new network and the necessary subnets, optionally spawning the related DHCP servers, then starting as many VM instances as required, based on a given set of available images, and specifying the subnet (or subnets) to which the VM is connected. Neutron takes care of creating a port on each specified subnet (and its underlying network) and of connecting the VM to that port, while the DHCP service on that network (resident in the network node) assigns a fixed IP address to it. Other virtual appliances (e.g., routers providing global connectivity) can be implemented directly in the cloud platform, by means of containers and network namespaces typically defined in the network node. The different tenant networks are isolated by means of VLANs and network namespaces, whereas the security groups protect the VMs from external attacks or unauthorized access. When some VM instances offer services that must be reachable by external users, the cloud provider defines a pool of floating IP addresses on the external network and configures the network node with VM-specific forwarding rules based on those floating addresses.

OpenStack implements the virtual network infrastructure (VNI) exploiting multiple virtual bridges connecting virtual and/or physical interfaces that may reside in different network namespaces. To better understand such a complex system, a graphical tool was developed to display all the network elements used by OpenStack [24]. Two examples, showing the internal state of a network node connected to three virtual subnets and a compute node running two VMs, are displayed in Figures 2 and 3, respectively.

Each node runs an OVS-based integration bridge named br-int and, connected to it, an additional OVS bridge for each data center physical network attached to the node. So the network node (Figure 2) includes br-tun for the instance/tunnel network and br-ex for the external network, while a compute node (Figure 3) includes br-tun only.

Layer 2 virtualization and multitenant isolation on the physical network can be implemented using either VLANs or layer 2-in-layer 3/4 tunneling solutions, such as Virtual eXtensible LAN (VXLAN) or Generic Routing Encapsulation (GRE), which allow extending the local virtual networks also to remote data centers [25]. The examples shown in Figures 2 and 3 refer to the case of tenant isolation implemented with GRE tunnels on the instance/tunnel network. Whatever virtualization technology is used in the physical network, its virtual networks must be mapped into the VLANs used internally by Neutron to achieve isolation. This is performed by taking advantage of the programmable features available in OVS, through the insertion of appropriate OpenFlow mapping rules in br-int and br-tun.

Virtual bridges are interconnected by means of either virtual Ethernet (veth) pairs or patch port pairs, consisting of two virtual interfaces that act as the endpoints of a pipe: anything entering one endpoint always comes out on the other side.

From the networking point of view, the creation of a new VM instance involves the following steps:

(i) The OpenStack scheduler component running in the controller node chooses the compute node that will host the VM.
(ii) A tap interface is created for each VM network interface, to connect it to the Linux kernel.
(iii) A Linux Bridge dedicated to each VM network interface is created (in Figure 3 two of them are shown), and the corresponding tap interface is attached to it.
(iv) A veth pair connecting the new Linux Bridge to the integration bridge is created.

The veth pair clearly emulates the Ethernet cable that would connect the two bridges in real life. Nonetheless, why the new Linux Bridge is needed is not intuitive, as the VM's tap interface could be directly attached to br-int. In short, the reason is that the antispoofing rules currently implemented by Neutron adopt the native Linux kernel filtering functions (netfilter) applied to bridged tap interfaces, which work only under Linux Bridges. Therefore, the Linux Bridge is required as an intermediate element to interconnect the VM to the integration bridge. The security rules are applied to the Linux Bridge on the tap interface that connects the kernel-level bridge to the virtual Ethernet port of the VM running in user-space.

4. Experimental Setup

The previous section makes the complexity of the OpenStack virtual network infrastructure clear. To understand optimal design strategies in terms of network performance, it is of great importance to analyze it under critical traffic conditions and assess the maximum sustainable packet rate under different application scenarios. The goal is to isolate as much as possible the level of performance of the main OpenStack network components and determine where the bottlenecks are located, speculating on possible improvements. To this purpose, a test-bed including a controller node, one or two compute nodes (depending on the specific experiment), and a network node was deployed and used to obtain the results presented in the following. In the test-bed, each compute node runs KVM, the native Linux VM Hypervisor, and is equipped

Figure 2: Network elements in an OpenStack network node connected to three virtual subnets. Three OVS bridges (red boxes) are interconnected by patch port pairs (orange boxes). br-ex is directly attached to the external network physical interface (eth0), whereas the GRE tunnel is established on the instance/tunnel network physical interface (eth1) to connect br-tun with its counterpart in the compute node. A number of br-int ports (light-green boxes) are connected to four virtual router interfaces and three DHCP servers. An additional physical interface (eth2) connects the network node to the management network.

with 8 GB of RAM and a quad-core processor enabled to hyperthreading, resulting in 8 virtual CPUs.

The test-bed was configured to implement three possible use cases:

(1) a typical single tenant cloud computing scenario;
(2) a multitenant NFV scenario with dedicated network functions;
(3) a multitenant NFV scenario with shared network functions.

For each use case, multiple experiments were executed, as reported in the following. In the various experiments, typically a traffic source sends packets at increasing rate to a destination that measures the received packet rate and throughput. To this purpose, the RUDE & CRUDE tool was used for both traffic generation and measurement [26]. In some cases the Iperf3 tool was also added, to generate background traffic at a fixed data rate [27]. All physical interfaces involved in the experiments were Gigabit Ethernet network cards.
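As a rough illustration of the measurement principle (not of the RUDE & CRUDE tool itself, whose internals are not described here), the following hypothetical Python sender emits fixed-size UDP datagrams at a target packet rate towards a receiver that counts them:

```python
import socket, time

def send_at_rate(dst, rate_pps, payload_len=1024, duration=10.0):
    """Send UDP datagrams of payload_len bytes at rate_pps packets/s
    for duration seconds (simple paced loop; accuracy is best-effort)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = bytes(payload_len)
    interval = 1.0 / rate_pps
    t_next, t_end = time.time(), time.time() + duration
    while time.time() < t_end:
        sock.sendto(payload, dst)
        t_next += interval
        delay = t_next - time.time()
        if delay > 0:
            time.sleep(delay)

# Example: 10 kpps towards a hypothetical receiver address.
# send_at_rate(("192.168.1.2", 9000), rate_pps=10_000)
```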

4.1. Single Tenant Cloud Computing Scenario. This is the typical configuration where a single tenant runs one or multiple VMs that exchange traffic with one another in the cloud or with an external host, as shown in Figure 4. This is a rather trivial case of limited general interest, but it is useful to assess some basic concepts and pave the way to the deeper analysis developed in the second part of this section. In the experiments reported, as mentioned above, the virtualization Hypervisor was always KVM. A scenario with OpenStack running the cloud environment and a scenario without OpenStack were considered, to assess some general

Figure 3: Network elements in an OpenStack compute node running two VMs. Two Linux Bridges (blue boxes) are attached to the VM tap interfaces (green boxes) and connected by virtual Ethernet pairs (light-blue boxes) to br-int.

comparison and allow a first isolation of the performance degradation due to the individual building blocks, in particular Linux Bridge and OVS. The experiments report the following cases:

(1) OpenStack scenario: it adopts the standard OpenStack cloud platform, as described in the previous section, with two VMs respectively acting as sender and receiver. In particular, the following setups were tested:
(1.1) a single compute node executing two colocated VMs;
(1.2) two distinct compute nodes, each executing a VM.

(2) Non-OpenStack scenario: it adopts physical hosts running Linux-Ubuntu server and KVM Hypervisor, using either OVS or Linux Bridge as a virtual switch. The following setups were tested:
(2.1) one physical host executing two colocated VMs, acting as sender and receiver and directly connected to the same Linux Bridge;
(2.2) the same setup as the previous one, but with an OVS bridge instead of a Linux Bridge;
(2.3) two physical hosts, one executing the sender VM connected to an internal OVS, and the other natively acting as the receiver.

Figure 4: Reference logical architecture of a single tenant virtual infrastructure with 5 hosts: 4 hosts are implemented as VMs in the cloud and are interconnected via the OpenStack layer 2 virtual infrastructure; the 5th host is implemented by a physical machine placed outside the cloud, but still connected to the same logical LAN.

Figure 5: Multitenant NFV scenario with dedicated network functions, tested on the OpenStack platform.

4.2. Multitenant NFV Scenario with Dedicated Network Functions. The multitenant scenario we want to analyze is inspired by a simple NFV case study, as illustrated in Figure 5: each tenant's service chain consists of a customer-controlled VM, followed by a dedicated deep packet inspection (DPI) virtual appliance and a conventional gateway (router) connecting the customer LAN to the public Internet. The DPI is deployed by the service operator as a separate VM with two network interfaces, running a traffic monitoring application based on the nDPI library [28]. It is assumed that the DPI analyzes the traffic profile of the customers (source and destination IP addresses and ports, application protocol, etc.) to guarantee the matching with the customer service level agreement (SLA), a practice that is rather common among Internet service providers to enforce network security and traffic policing. The virtualization approach, executing the DPI in a VM, makes it possible to easily configure and adapt the inspection function to the specific tenant characteristics. For this reason, every tenant has its own DPI with dedicated configuration. On the other hand, the gateway has to implement a standard functionality and is shared among customers. It is implemented as a virtual router for packet forwarding and NAT operations.

The implementation of the test scenarios has been done following the OpenStack architecture. The compute nodes of the cluster run the VMs, while the network node runs the virtual router within a dedicated network namespace. All layer 2 connections are implemented by a virtual switch (with proper VLAN isolation) distributed in both the compute and network nodes. Figure 6 shows the view provided by the OpenStack dashboard in the case of 4 tenants simultaneously active, which is the one considered for the numerical results presented in the following. The choice of 4 tenants was made to provide meaningful results with an acceptable degree of complexity, without lack of generality. As results show, this is enough to put the hardware resources of the compute node under stress and therefore evaluate performance limits and critical issues.

It is very important to outline that the VM setup shown in Figure 5 is not commonly seen in a traditional cloud computing environment. The VMs usually behave as single hosts connected as endpoints to one or more virtual networks, with one single network interface and no pass-through forwarding duties. In NFV, the virtual network functions (VNFs) often perform actions that require packet forwarding: Network Address Translators (NATs), Deep Packet Inspectors (DPIs), and so forth all belong to this category. If such VNFs are hosted in VMs, the result is that VMs in the OpenStack infrastructure must be allowed to perform packet forwarding, which goes against the typical rules implemented for security reasons in OpenStack. For instance, when a new VM is instantiated, it is attached to a Linux Bridge to which filtering rules are applied, with the goal of avoiding that the VM sends packets with MAC and IP addresses that are not the ones allocated to the VM itself. Clearly this is an antispoofing rule that makes perfect sense in a normal networking environment, but it impairs the forwarding of packets originated by another VM, as is the case of the NFV scenario. In the scenario considered here, it was therefore necessary to permanently modify the filtering rules in the Linux Bridges, by allowing, within each tenant slice, packets coming from or directed to the customer VM's IP address to pass through the Linux Bridges attached to the DPI virtual appliance. Similarly, the virtual router is usually connected just to one LAN, and therefore its NAT function is configured for a single pool of addresses. This was also modified and adapted to serve the whole set of internal networks used in the multitenant setup.

4.3. Multitenant NFV Scenario with Shared Network Functions. We finally extend our analysis to a set of multitenant scenarios assuming different levels of shared VNFs, as illustrated in Figure 7. We start with a single VNF, that is, the virtual router connecting all tenants to the external network (Figure 7(a)). Then we progressively add a shared DPI (Figure 7(b)), a shared firewall/NAT function (Figure 7(c)), and a shared traffic shaper (Figure 7(d)). The rationale behind this last group of setups is to evaluate how NFV deployment on top of an OpenStack compute node performs under a realistic multitenant scenario, where traffic flows must be processed by a chain of multiple VNFs.



Figure 6: The OpenStack dashboard shows the tenants' virtual networks (slices). Each slice includes a VM connected to an internal network (InVMnet_i) and a second VM performing DPI and packet forwarding between InVMnet_i and DPInet_i. Connectivity with the public Internet is provided for all by the virtual router in the bottom-left corner.

The complexity of the virtual network path inside the compute node for the VNF chaining of Figure 7(d) is displayed in Figure 8. The peculiar nature of NFV traffic flows is clearly shown in the figure, where packets are forwarded multiple times across br-int as they enter and exit the multiple VNFs running in the compute node.

5. Numerical Results

5.1. Benchmark Performance. Before presenting and discussing the performance of the study scenarios described above, it is important to set some benchmark as a reference for comparison. This was done by considering a back-to-back (B2B) connection between two physical hosts with the same hardware configuration used in the cluster of the cloud platform.

The former host acts as traffic generator, while the latter acts as traffic sink. The aim is to verify and assess the maximum throughput and sustainable packet rate of the hardware platform used for the experiments. Packet flows ranging from 10^3 to 10^5 packets per second (pps), for both 64- and 1500-byte IP packet sizes, were generated.

For 1500-byte packets, the throughput saturates at about 970 Mbps at 80 Kpps. Given that the measurement does not consider the Ethernet overhead, this limit is clearly very close to 1 Gbps, which is the physical limit of the Ethernet interface. For 64-byte packets the results are different, since the maximum measured throughput is about 150 Mbps. Therefore, the limiting factor is not the Ethernet bandwidth but the maximum sustainable packet processing rate of the compute node. These results are shown in Figure 9.
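As a sanity check of these figures, the saturation point for 1500-byte packets can be predicted from the Ethernet framing overhead. The short computation below is our own back-of-the-envelope estimate, assuming 14 bytes of header, 4 bytes of FCS, 8 bytes of preamble, and 12 bytes of interframe gap per frame, and that the measured throughput counts IP bytes only.

```python
# Back-of-the-envelope check of the B2B saturation point on a 1 Gb/s link.
LINE_RATE = 1e9                  # bit/s
ETH_OVERHEAD = 14 + 4 + 8 + 12   # header + FCS + preamble + IFG, in bytes

def saturation_pps(ip_packet_bytes: int) -> float:
    return LINE_RATE / ((ip_packet_bytes + ETH_OVERHEAD) * 8)

for size in (1500, 64):
    pps = saturation_pps(size)
    print(f"{size:>5}-byte packets: {pps / 1e3:7.1f} Kpps, "
          f"{pps * size * 8 / 1e6:6.1f} Mbps of IP-level throughput")
# 1500-byte packets: ~81.3 Kpps and ~975 Mbps, matching the measured
# ~970 Mbps at 80 Kpps; for 64-byte packets the line-rate limit (~1225 Kpps)
# is far above the measured rate, so packet processing is the bottleneck.
```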


Figure 7: Multitenant NFV scenario with shared network functions tested on the OpenStack platform: (a) single VNF (virtual router); (b) two VNFs' chaining (DPI and virtual router); (c) three VNFs' chaining (DPI, firewall/NAT, and virtual router); (d) four VNFs' chaining (DPI, firewall/NAT, traffic shaper, and virtual router).


This latter limitation, related to the processing capabilities of the hosts, is not very relevant to the scope of this work. Indeed, it is always possible in a real operation environment to deploy more powerful and better dimensioned hardware. This was not possible in this set of experiments, where the cloud cluster was an existing research infrastructure that could not be modified at will. Nonetheless, the objective here is to understand the limitations that emerge as a consequence of the networking architecture resulting from the deployment of the VNFs in the cloud, not of the specific hardware configuration. For these reasons, as well as for the sake of brevity, the numerical results presented in the following mostly focus on the case of 1500-byte packet length, which stresses the network more than the hosts in terms of performance.

5.2. Single Tenant Cloud Computing Scenario. The first series of results is related to the single tenant scenario described in Section 4.1. Figure 10 shows the comparison of OpenStack setups (1.1) and (1.2) with the B2B case. The figure shows that the different networking configurations play a crucial role in performance. Setup (1.1), with the two VMs colocated in the same compute node, clearly is more demanding, since the compute node has to process the workload of all the components shown in Figure 3, that is, packet generation and reception in two VMs and layer 2 switching in two Linux Bridges and two OVS bridges (as a matter of fact, the packets are both outgoing and incoming at the same time within the same physical machine). The performance starts deviating from the B2B case at around 20 Kpps, with a saturating effect starting at 30 Kpps. This is the maximum packet processing capability of the compute node, regardless of the physical networking capacity, which is not fully exploited in this particular scenario where the traffic flow does not leave the physical host. Setup (1.2) splits the workload over two physical machines, and the benefit is evident: the performance is almost ideal, with a very little penalty due to the virtualization overhead.

These very simple experiments lead to an important conclusion that motivates the more complex experiments that follow: the standard OpenStack virtual network implementation can show significant performance limitations. For this reason, the first objective was to investigate where the possible bottleneck is, by evaluating the performance of the virtual network components in isolation. This cannot be done with OpenStack in action; therefore, ad hoc virtual networking scenarios were implemented, deploying just parts of the typical OpenStack infrastructure. These are called Non-OpenStack scenarios in the following.


Figure 8: A view of the OpenStack compute node with the tenant VM and the VNFs installed, including the building blocks of the virtual network infrastructure (tap interfaces, per-VM Linux Bridges, veth pairs, and the OVS bridges br-int and br-tun). The red dashed line shows the path followed by the packets traversing the VNF chain displayed in Figure 7(d).

Figure 9: Throughput versus generated packet rate in the B2B setup for 64- and 1500-byte packets. Comparison with the ideal 1500-byte packet throughput.


Figure 10: Received versus generated packet rate in the OpenStack scenario, setups (1.1) and (1.2), with 1500-byte packets.


Setups (2.1) and (2.2) compare Linux Bridge, OVS, and B2B, as shown in Figure 11. The graphs show interesting and important results that can be summarized as follows:

(i) The introduction of some virtual network component (thus introducing the processing load of the physical hosts in the equation) is always a cause of performance degradation, but with very different degrees of magnitude depending on the virtual network component.

Figure 11: Received versus generated packet rate in the Non-OpenStack scenario, setups (2.1) and (2.2), with 1500-byte packets.


(ii) OVS introduces a rather limited performance degradation at very high packet rates, with a loss of a few percent.

(iii) Linux Bridge introduces a significant performance degradation, starting well before the OVS case and leading to a loss in throughput as high as 50%.

The conclusion of these experiments is that the presence of additional Linux Bridges in the compute nodes is one of the main reasons for the OpenStack performance degradation. Results obtained from testing setup (2.3) are displayed in Figure 12, confirming that with OVS it is possible to reach performance comparable with the baseline.

5.3. Multitenant NFV Scenario with Dedicated Network Functions. The second series of experiments was performed with reference to the multitenant NFV scenario with dedicated network functions described in Section 4.2. The case study considers that different numbers of tenants are hosted in the same compute node, sending data to a destination outside the LAN, therefore beyond the virtual gateway. Figure 13 shows the packet rate actually received at the destination for each tenant, for different numbers of simultaneously active tenants, with 1500-byte IP packet size. In all cases the tenants generate the same amount of traffic, resulting in as many overlapping curves as the number of active tenants. All curves grow linearly as long as the generated traffic is sustainable, and then they saturate. The saturation is caused by the physical bandwidth limit imposed by the Gigabit Ethernet interfaces involved in the data transfer. In fact, the curves become flat as soon as the packet rate reaches about 80 Kpps for 1 tenant, about 40 Kpps for 2 tenants, about 27 Kpps for 3 tenants, and about 20 Kpps for 4 tenants, that is, when the total packet rate is slightly more than 80 Kpps, corresponding to 1 Gbps.

Figure 12: Received versus generated packet rate in the Non-OpenStack scenario, setup (2.3), with 1500-byte packets.

Figure 13: Received versus generated packet rate for each tenant (T1, T2, T3, and T4) for different numbers of active tenants, with 1500-byte IP packet size.


In this case it is worth investigating what happens for small packets, therefore putting more pressure on the processing capabilities of the compute node. Figure 14 reports the 64-byte packet size case. As discussed previously, in this case the performance saturation is not caused by the physical bandwidth limit but by the inability of the hardware platform to cope with the packet processing workload (in fact, the single compute node has to process the workload of all the components involved, including packet generation and DPI in the VMs of each tenant, as well as layer 2 packet processing and switching in three Linux Bridges per tenant and two OVS bridges).

Figure 14: Received versus generated packet rate for each tenant (T1, T2, T3, and T4) for different numbers of active tenants, with 64-byte IP packet size.

As could be easily expected from the results presented in Figure 9, the virtual network is not able to use the whole physical capacity. Even in the case of just one tenant, a total bit rate of about 77 Mbps, well below 1 Gbps, is measured. Moreover, this penalty increases with the number of tenants (i.e., with the complexity of the virtual system). With two tenants the curve saturates at a total of approximately 150 Kpps (75 × 2), with three tenants at a total of approximately 135 Kpps (45 × 3), and with four tenants at a total of approximately 120 Kpps (30 × 4). This is to say that an increase of one unit in the number of tenants results in a decrease of about 10% in the usable overall network capacity, and in a similar penalty per tenant.

Given the results of the previous section, it is likely that the Linux Bridges are responsible for most of this performance degradation. In Figure 15, a comparison is presented between the total throughput obtained under normal OpenStack operations and the corresponding total throughput measured in a custom configuration where the Linux Bridges attached to each VM are bypassed. To implement the latter scenario, the OpenStack virtual network configuration running in the compute node was modified by connecting each VM's tap interface directly to the OVS integration bridge. The curves show that the presence of Linux Bridges in normal OpenStack mode is indeed causing performance degradation, especially when the workload is high (i.e., with 4 tenants). It is interesting to note also that the penalty related to the number of tenants is mitigated by the bypass, but not fully solved.
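A minimal sketch of how such a bypass could be realized on a compute node is given below, with illustrative interface names (in a real deployment OpenStack derives them from the Neutron port UUID, and the VLAN tag assigned to the original OVS port must also be replicated on the new port for tenant isolation to be preserved).

```python
import subprocess

# Illustrative names: OpenStack generates them at VM creation time.
TAP = "tap4345870b-63"            # VM interface
LINUX_BRIDGE = "qbr4345870b-63"   # per-VM Linux Bridge to be bypassed
OVS_PORT = "qvo4345870b-63"       # veth leg currently plugged into br-int

def bypass_linux_bridge(tap: str, linux_bridge: str, ovs_port: str) -> None:
    # Detach the tap from the Linux Bridge, remove the veth pair leg from
    # br-int, and plug the tap directly into the OVS integration bridge.
    subprocess.run(["brctl", "delif", linux_bridge, tap], check=True)
    subprocess.run(["ovs-vsctl", "del-port", "br-int", ovs_port], check=True)
    subprocess.run(["ovs-vsctl", "add-port", "br-int", tap], check=True)

bypass_linux_bridge(TAP, LINUX_BRIDGE, OVS_PORT)
```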


Figure 15: Total throughput measured versus total packet rate generated by 2 to 4 tenants, for 64-byte packet size. Comparison between normal OpenStack mode and Linux Bridge bypass with 3 and 4 tenants.

Figure 16: Received versus generated packet rate for one tenant (T1) when four tenants are active, with 1500-byte IP packet size and different levels of VNF chaining, as per Figure 7. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

5.4. Multitenant NFV Scenario with Shared Network Functions. The third series of experiments was performed with reference to the multitenant NFV scenario with shared network functions described in Section 4.3. In each experiment, four tenants equally generate increasing amounts of traffic, ranging from 1 to 100 Kpps. Figures 16 and 17 show the packet rate actually received at the destination from tenant T1, as a function of the packet rate generated by T1, for different levels of VNF chaining, with 1500- and 64-byte IP packet size, respectively. The measurements demonstrate that, for the 1500-byte case, adding a single shared VNF (even one that executes heavy packet processing, such as the DPI) does not significantly impact the forwarding performance of the OpenStack compute node for packet rates below 50 Kpps (note that the physical capacity is saturated by the flows simultaneously generated from four tenants at around 20 Kpps, similarly to what happens in the dedicated VNF case of Figure 13).

Figure 17: Received versus generated packet rate for one tenant (T1) when four tenants are active, with 64-byte IP packet size and different levels of VNF chaining, as per Figure 7. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

Then the throughput slowly degrades. In contrast, when 64-byte packets are generated, even a single VNF can cause heavy performance losses above 25 Kpps, when the packet rate reaches the sustainability limit of the forwarding capacity of our compute node. Independently of the packet size, adding another VNF with heavy packet processing (the firewall/NAT is configured with 40000 matching rules) causes the performance to rapidly degrade. This is confirmed when a fourth VNF is added to the chain, although for the 1500-byte case the measured packet rate is the one that saturates the maximum bandwidth made available by the traffic shaper. Very similar performance, which we do not show here, was measured also for the other three tenants.

To further investigate the effect of VNF chaining, we considered the case when traffic generated by tenant T1 is not subject to VNF chaining (as in Figure 7(a)), whereas flows originated from T2, T3, and T4 are processed by four VNFs (as in Figure 7(d)). The results presented in Figures 18 and 19 demonstrate that, owing to the traffic shaping function applied to the other tenants, the throughput of T1 can reach values not very far from the case when it is the only active tenant, especially for packet rates below 35 Kpps. Therefore, a smart choice of the VNF chaining and a careful planning of the cloud platform resources could improve the performance of a given class of priority customers. In the same situation we measured the TCP throughput achievable by the four tenants. As shown in Figure 20, we can reach the same conclusions as in the UDP case.

Figure 18: Received throughput versus generated packet rate for each tenant (T1, T2, T3, and T4) when T1 does not traverse the VNF chain of Figure 7(d), with 1500-byte IP packet size. Comparison with the single tenant case. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

Figure 19: Received throughput versus generated packet rate for each tenant (T1, T2, T3, and T4) when T1 does not traverse the VNF chain of Figure 7(d), with 64-byte IP packet size. Comparison with the single tenant case. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

6. Conclusion


Figure 20: Received TCP throughput for each tenant (T1, T2, T3, and T4) when T1 does not traverse the VNF chain of Figure 7(d). Comparison with the single tenant case. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

Network Function Virtualization will completely reshape the approach of telco operators to provide existing as well as novel network services, taking advantage of the increased flexibility and reduced deployment costs of the cloud computing paradigm. In this work, the problem of evaluating the complexity and performance, in terms of sustainable packet rate, of virtual networking in cloud computing infrastructures dedicated to NFV deployment was addressed. An OpenStack-based cloud platform was considered and deeply analyzed to fully understand the architecture of its virtual network infrastructure. To this end, an ad hoc visual tool was also developed that graphically plots the different functional blocks (and related interconnections) put in place by Neutron, the OpenStack networking service. Some examples were provided in the paper.

The analysis brought the focus of the performance investigation on the two basic software switching elements natively adopted by OpenStack, namely, Linux Bridge and Open vSwitch. Their performance was first analyzed in a single tenant cloud computing scenario, by running experiments on a standard OpenStack setup as well as in ad hoc stand-alone configurations built with the specific purpose of observing them in isolation. The results prove that the Linux Bridge is the critical bottleneck of the architecture, while Open vSwitch shows an almost optimal behavior.

The analysis was then extended to more complex scenarios, assuming a data center hosting multiple tenants deploying NFV environments. The case studies considered first a simple dedicated deep packet inspection function, followed by conventional address translation and routing, and then a more realistic virtual network function chaining shared among a set of customers, with increased levels of complexity. Results about the sustainable packet rate and throughput performance of the virtual network infrastructure were presented and discussed.

The main outcome of this work is that an open-source cloud computing platform such as OpenStack can be effectively adopted to deploy NFV in network edge data centers replacing legacy telco central offices. However, this solution poses some limitations to the network performance, which are not simply related to the hosting hardware maximum capacity but also to the virtual network architecture implemented by OpenStack. Nevertheless, our study demonstrates that some of these limitations can be mitigated with a careful redesign of the virtual network infrastructure and an optimal planning of the virtual network functions. In any case, such limitations must be carefully taken into account for any engineering activity in the virtual networking arena.

Obviously, scaling up the system and distributing the virtual network functions among several compute nodes will definitely improve the overall performance. However, in this case the role of the physical network infrastructure becomes critical, and an accurate analysis is required in order to isolate the contributions of virtual and physical components. We plan to extend our study in this direction in our future work, after properly upgrading our experimental test-bed.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work was partially funded by EIT ICT Labs, Action Line on Future Networking Solutions, Activity no. 15270/2015 "SDN at the Edges". The authors would like to thank Mr. Giuliano Santandrea for his contributions to the experimental setup.

References

[1] B. Han, V. Gopalakrishnan, L. Ji, and S. Lee, "Network function virtualization: challenges and opportunities for innovations," IEEE Communications Magazine, vol. 53, no. 2, pp. 90–97, 2015.
[2] N. M. Mosharaf Kabir Chowdhury and R. Boutaba, "A survey of network virtualization," Computer Networks, vol. 54, no. 5, pp. 862–876, 2010.
[3] The Open Networking Foundation, Software-Defined Networking: The New Norm for Networks, ONF White Paper, The Open Networking Foundation, 2012.
[4] The European Telecommunications Standards Institute, "Network functions virtualisation (NFV): architectural framework," ETSI GS NFV 002 V1.2.1, The European Telecommunications Standards Institute, 2014.
[5] A. Manzalini, R. Minerva, F. Callegati, W. Cerroni, and A. Campi, "Clouds of virtual machines in edge networks," IEEE Communications Magazine, vol. 51, no. 7, pp. 63–70, 2013.
[6] J. Soares, C. Goncalves, B. Parreira et al., "Toward a telco cloud environment for service functions," IEEE Communications Magazine, vol. 53, no. 2, pp. 98–106, 2015.
[7] Open Networking Lab, Central Office Re-Architected as Datacenter (CORD), ON.Lab White Paper, Open Networking Lab, 2015.
[8] K. Pretz, "Software already defines our lives—but the impact of SDN will go beyond networking alone," IEEE The Institute, vol. 38, no. 4, p. 8, 2014.
[9] OpenStack Project, http://www.openstack.org/.
[10] F. Sans and E. Gamess, "Analytical performance evaluation of different switch solutions," Journal of Computer Networks and Communications, vol. 2013, Article ID 953797, 11 pages, 2013.
[11] P. Emmerich, D. Raumer, F. Wohlfart, and G. Carle, "Performance characteristics of virtual switching," in Proceedings of the 3rd International Conference on Cloud Networking (CloudNet '14), pp. 120–125, IEEE, Luxembourg City, Luxembourg, October 2014.
[12] R. Shea, F. Wang, H. Wang, and J. Liu, "A deep investigation into network performance in virtual machine based cloud environments," in Proceedings of the 33rd IEEE Conference on Computer Communications (INFOCOM '14), pp. 1285–1293, IEEE, Ontario, Canada, May 2014.
[13] P. Rad, R. V. Boppana, P. Lama, G. Berman, and M. Jamshidi, "Low-latency software defined network for high performance clouds," in Proceedings of the 10th System of Systems Engineering Conference (SoSE '15), pp. 486–491, San Antonio, Tex, USA, May 2015.
[14] S. Oechsner and A. Ripke, "Flexible support of VNF placement functions in OpenStack," in Proceedings of the 1st IEEE Conference on Network Softwarization (NetSoft '15), pp. 1–6, London, UK, April 2015.
[15] G. Almasi, M. Banikazemi, B. Karacali, M. Silva, and J. Tracey, "Openstack networking: it's time to talk performance," in Proceedings of the OpenStack Summit, Vancouver, Canada, May 2015.
[16] F. Callegati, W. Cerroni, C. Contoli, and G. Santandrea, "Performance of network virtualization in cloud computing infrastructures: the OpenStack case," in Proceedings of the 3rd IEEE International Conference on Cloud Networking (CloudNet '14), pp. 132–137, Luxembourg City, Luxembourg, October 2014.
[17] F. Callegati, W. Cerroni, C. Contoli, and G. Santandrea, "Performance of multi-tenant virtual networks in OpenStack-based cloud infrastructures," in Proceedings of the 2nd IEEE Workshop on Cloud Computing Systems, Networks, and Applications (CCSNA '14), in conjunction with IEEE Globecom 2014, pp. 81–85, Austin, Tex, USA, December 2014.
[18] N. Handigol, B. Heller, V. Jeyakumar, B. Lantz, and N. McKeown, "Reproducible network experiments using container-based emulation," in Proceedings of the 8th International Conference on Emerging Networking Experiments and Technologies (CoNEXT '12), pp. 253–264, ACM, December 2012.
[19] P. Bellavista, F. Callegati, W. Cerroni et al., "Virtual network function embedding in real cloud environments," Computer Networks, vol. 93, part 3, pp. 506–517, 2015.
[20] M. F. Bari, R. Boutaba, R. Esteves et al., "Data center network virtualization: a survey," IEEE Communications Surveys & Tutorials, vol. 15, no. 2, pp. 909–928, 2013.
[21] The Linux Foundation, Linux Bridge, 2009, http://www.linuxfoundation.org/collaborate/workgroups/networking/bridge.
[22] J. T. Yu, "Performance evaluation of Linux bridge," in Proceedings of the Telecommunications System Management Conference, Louisville, Ky, USA, April 2004.
[23] B. Pfaff, J. Pettit, T. Koponen, K. Amidon, M. Casado, and S. Shenker, "Extending networking into the virtualization layer," in Proceedings of the 8th ACM Workshop on Hot Topics in Networks (HotNets '09), New York, NY, USA, October 2009.
[24] G. Santandrea, Show My Network State, 2014, https://sites.google.com/site/showmynetworkstate.
[25] R. Jain and S. Paul, "Network virtualization and software defined networking for cloud computing: a survey," IEEE Communications Magazine, vol. 51, no. 11, pp. 24–31, 2013.
[26] RUDE & CRUDE: Real-Time UDP Data Emitter & Collector for RUDE, http://sourceforge.net/projects/rude/.
[27] iperf3: a TCP, UDP, and SCTP network bandwidth measurement tool, https://github.com/esnet/iperf.
[28] nDPI: Open and Extensible LGPLv3 Deep Packet Inspection Library, http://www.ntop.org/products/ndpi/.

Research Article
Server Resource Dimensioning and Routing of Service Function Chain in NFV Network Architectures

V. Eramo,1 A. Tosti,2 and E. Miucci1

1DIET, "Sapienza" University of Rome, Via Eudossiana 18, 00184 Rome, Italy
2Telecom Italia, Via di Val Cannuta 250, 00166 Roma, Italy

Correspondence should be addressed to E. Miucci; 1kronos1@gmail.com

Received 29 September 2015; Accepted 18 February 2016

Academic Editor: Vinod Sharma

Copyright © 2016 V. Eramo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The Network Function Virtualization (NFV) technology aims at virtualizing the network service with the execution of the single service components in Virtual Machines activated on Commercial-off-the-shelf (COTS) servers. Any service is represented by a Service Function Chain (SFC), that is, a set of VNFs to be executed according to a given order. The running of VNFs needs the instantiation of VNF instances (VNFIs) that, in general, are software components executed on Virtual Machines. In this paper we cope with the routing and resource dimensioning problem in NFV architectures. We formulate the optimization problem and, due to its NP-hard complexity, heuristics are proposed for both cases of offline and online traffic demand. We show how the heuristics work correctly by guaranteeing a uniform occupancy of the server processing capacity and the network link bandwidth. A consolidation algorithm for the power consumption minimization is also proposed. The application of the consolidation algorithm allows for a high power consumption saving that, however, is to be paid with an increase in SFC blocking probability.

1. Introduction

Today's networks are overly complex, partly due to an increasing variety of proprietary, fixed-function appliances that are unable to deliver the agility and economics needed to address constantly changing market requirements [1]. This is because network elements have traditionally been optimized for high packet throughput at the expense of flexibility, thus hampering the deployment of new services [2]. Network Function Virtualization (NFV) can provide the infrastructure flexibility and agility needed to successfully compete in today's evolving communications landscape [3]. NFV implements network functions in software running on a pool of shared commodity servers instead of using dedicated proprietary hardware. This virtualized approach decouples the network hardware from the network function and results in increased infrastructure flexibility and reduced hardware costs. Because the infrastructure is simplified and streamlined, new and expanded services can be created quickly and with less expense. Implementation of the paradigm has also been proposed [4] and its performance has been investigated [5]. To support the NFV technology, both ETSI [6, 7] and

IETF [8, 9] are defining novel network architectures able to allocate resources for Virtualized Network Functions (VNFs) as well as to manage and orchestrate NFV to support services. In particular, the service is represented by a Service Function Chain (SFC) [8], that is, a set of VNFs that have to be executed according to a given order. Any VNF is run on a VNF instance (VNFI) implemented with one Virtual Machine (VM) whose resources (Vcores, RAM memory, etc.) are allocated to it [10]. Some solutions have been proposed in the literature to solve the problem of choosing the servers where to instantiate the VNFs and to determine the network paths interconnecting the VNFs [1]. A formulation of the optimization problem is illustrated in [11]. Three greedy algorithms and a tabu search-based heuristic are proposed in [12]. The extension of the problem to the case in which virtual routers are also considered is proposed in [13].

The innovative contribution of our paper is that, differently from [12], we follow an approach that allows for both the resource dimensioning and the SFC routing. We formulate the optimization problem whose objective is the minimization of the number of dropped SFC requests, with the constraint that the SFCs are routed so as to respect both the server processing capacity and the network link bandwidth.



With the problem being NP-hard, we introduce a heuristic that allows for (i) the dimensioning of the Virtual Machines in terms of the number of Vcores assigned and (ii) the SFC routing through the VNF instances implemented by the Virtual Machines. The heuristic performs the dimensioning and routing operations simultaneously. Furthermore, we propose a consolidation algorithm based on Virtual Machine migrations and able to achieve power consumption savings. The proposed algorithms are evaluated in scenarios characterized by offline and online traffic demands.

The paper is organized as follows. The related work is discussed in Section 2. Section 3 is devoted to illustrating the optimization problem and the proposed heuristic for the offline traffic demand case. In Section 4 we describe an SFC planner in which the proposed heuristic is applied in the case of online traffic demand. The planner also implements a consolidation algorithm whose application allows for power consumption savings. Some numerical results are shown in Section 5 to prove the effectiveness of the proposed heuristics. Finally, the main conclusions and future research items are mentioned in Section 6.

2. Related Work

The Internet Engineering Task Force (IETF) has formed the SFC Working Group [8, 9] to define Service Function Chaining related problems and to standardize the architecture and protocols. A Service Function Chain (SFC) is defined as a set of abstract service functions [15] and ordering constraints that must be applied to packets selected as a result of the classification. When virtual service functions are considered, the SFC is referred to as a Virtual Network Function Forwarding Graph (VNFFG) within the ETSI [6]. To support the SFCs, Virtual Network Function instances (VNFIs) are activated and executed in COTS servers. To achieve the economics of scale expected from NFV, network link and server resources should be used efficiently. For this reason, efficient algorithms have to be introduced to determine where to instantiate the VNFIs and to route the SFCs, by choosing the network paths and the VNFIs involved. The algorithms have to take into account the limited resources of the network links and the servers, and pursue objectives of load balancing, energy saving, recovery from failure, and so on [1]. The task of placing SFCs is closely related to virtual network embedding [16] and virtual data network embedding [17], and may therefore be formulated as an optimization problem with a particular objective. This approach has been followed by [11, 18–21]. For instance, Moens and Turck [11] formulate the SFC placement problem as a linear optimization problem that has the objective of minimizing the number of active servers. Other objectives are pursued in [19] (latency, remaining data rate, number of used network nodes, etc.), and the SFC placing is formulated as a mixed integer quadratically constrained program.

It has been proved that the SFC placing problem is NP-hard. For this reason, efficient heuristics have been proposed and evaluated [10, 13, 22, 23]. For example, Xia et al. [22] formulate the placement and chaining problem as binary integer programming and propose a greedy heuristic in which the SFCs are first sorted according to their resource demands, and the SFCs with the highest resource demands are given priority for placement and routing.

In order to achieve energy consumption savings, the NFV architecture should allow for migrations of VNFIs, that is, the migration of the Virtual Machines implementing the VNFIs. Though VNFI migrations allow for energy consumption savings, they may impact the QoS performance received by the users of the migrated VNFs. A model has been proposed in [24] to derive some performance indicators, such as the whole service downtime and the total migration time, so as to make function migration decisions.

The contribution of this paper is twofold: (i) to propose and to evaluate the performance of an algorithm that performs simultaneously resource dimensioning and SFC routing, and (ii) to investigate the advantages, from the point of view of power consumption savings, that the application of server consolidation techniques allows us to achieve.

3. Offline Algorithms for SFC Routing in NFV Network Architectures

We consider the case in which SFC requests are known in advance. We formulate the optimization problem whose objective is the minimization of the number of dropped SFC requests, with the constraint that the SFCs are routed so as to respect both the server processing capacity and the network link bandwidth. With the problem being NP-hard, we introduce a heuristic that allows for (i) the dimensioning of the Virtual Machines in terms of the number of Vcores assigned and (ii) the SFC routing through the VNF instances implemented by the Virtual Machines. The heuristic performs the dimensioning and routing operations simultaneously.

The section is organized as follows. The network and traffic model is introduced in Section 3.1. Sections 3.2 and 3.3 are devoted to illustrating the optimization problem and the proposed heuristic, respectively.

3.1. Network and Traffic Model. Next we introduce the main terminology used to represent the physical network, the VNFs, and the SFC traffic requests [25]. We represent the physical network PN as a directed graph $G^{\rm PN}=(V^{\rm PN},E^{\rm PN})$, where $V^{\rm PN}$ and $E^{\rm PN}$ are the sets of physical nodes and links, respectively. The set $V^{\rm PN}$ of nodes is given by the union of the three node sets $V^{\rm PN}_{A}$, $V^{\rm PN}_{R}$, and $V^{\rm PN}_{S}$, which are the sets of access, switching, and server nodes, respectively. The server nodes and links are characterized by the following:

(i) $N^{\rm PN}_{\rm core}(w)$: processing capacity of the server node $w \in V^{\rm PN}_{S}$, in terms of the number of cores available;

(ii) $C^{\rm PN}(d)$: bandwidth of the physical link $d \in E^{\rm PN}$.

We assume that $F$ types of VNFs can be provisioned, such as firewall, IDS, proxy, load balancer, and so on. We denote by $\mathcal{F}=\{f_{1},f_{2},\ldots,f_{F}\}$ the set of VNFs, with $f_{i}$ being the $i$th VNF type. The packet processing time of the VNF $f_{i}$ is denoted by $t^{\rm proc}_{i}$ $(i=1,\ldots,F)$.


Figure 1: An example of a graph representing an SFC request for a flow characterized by a bandwidth of 1 Mbps and a packet length of 1500 bytes: the access nodes $u$ and $t$ are interconnected with the VNF nodes 1 and 2 through the links $e_{1}$, $e_{2}$, and $e_{3}$, with $B^{\rm SFC}(1)=B^{\rm SFC}(2)=83.33$ and $C^{\rm SFC}(e_{1})=C^{\rm SFC}(e_{2})=C^{\rm SFC}(e_{3})=1$.

We also assume that the network operator is receiving $T$ Service Function Chain (SFC) requests, known in advance. The $i$th SFC request is characterized by the graph $G^{\rm SFC}_{i}=(V^{\rm SFC}_{i},E^{\rm SFC}_{i})$, where $V^{\rm SFC}_{i}$ represents the set of access and VNF nodes and $E^{\rm SFC}_{i}$ $(i=1,\ldots,T)$ denotes the links between them. In particular, the set $V^{\rm SFC}_{i}$ is given by the union of $V^{\rm SFC}_{i,A}$ and $V^{\rm SFC}_{i,F}$, denoting the sets of access nodes and VNFs, respectively. The graph is characterized by the following parameters:

(i) $\alpha_{v,w}$: assuming the value 1 if the access node $v \in \bigcup_{i=1}^{T} V^{\rm SFC}_{i,A}$ characterizes an SFC request starting/terminating from/to the physical access node $w \in V^{\rm PN}_{A}$; otherwise its value is 0;

(ii) $\beta_{v,k}$: assuming the value 1 if the VNF node $v \in \bigcup_{i=1}^{T} V^{\rm SFC}_{i,F}$ needs the application of the VNF $f_{k}$ $(k \in [1,F])$; otherwise its value is 0;

(iii) $B^{\rm SFC}(v)$: the processing capacity requested by the VNF node $v \in \bigcup_{i=1}^{T} V^{\rm SFC}_{i,F}$; the parameter value depends on both the packet length and the bandwidth of the packet flow incoming to the VNF node;

(iv) $C^{\rm SFC}(e)$: bandwidth requested by the link $e \in \bigcup_{i=1}^{T} E^{\rm SFC}_{i}$.

An example of an SFC request is represented in Figure 1, where the traffic emitted by the ingress access node $u$ has to be handled by the VNF nodes $v_{1}$ and $v_{2}$ and is terminating at the egress access node $t$. The access and VNF nodes are interconnected by the links $e_{1}$, $e_{2}$, and $e_{3}$. If the flow bandwidth is 1 Mbps and the packet length is 1500 bytes, we obtain the processing capacities $B^{\rm SFC}(v_{1})=B^{\rm SFC}(v_{2})=83.33$ packets per second, while the bandwidths $C^{\rm SFC}(e_{1})$, $C^{\rm SFC}(e_{2})$, and $C^{\rm SFC}(e_{3})$ of all links equal 1 Mbps.
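For illustration, the following short sketch (our own helper functions, not part of the model definition) reproduces the computation of $B^{\rm SFC}(v)$ for the example of Figure 1, together with the corresponding Vcore demand whose ceiling is allocated later in Algorithm 1 (cf. also constraint (4) in Section 3.2).

```python
import math

def processing_capacity(bandwidth_bps: float, packet_len_bytes: int) -> float:
    """B_SFC(v): packet rate offered to a VNF node, in packets per second."""
    return bandwidth_bps / (packet_len_bytes * 8)

def vcores_needed(b_sfc_pps: float, t_proc_s: float) -> int:
    """Ceiling of B_SFC(v) * t_proc_k: the Vcores a VNF instance must
    allocate for this node (cf. constraint (4) and Algorithm 1)."""
    return math.ceil(b_sfc_pps * t_proc_s)

rate = processing_capacity(1e6, 1500)   # 83.33 pps, as in Figure 1
print(round(rate, 2))                   # -> 83.33
print(vcores_needed(rate, 7.08e-6))     # a VNF with t_proc = 7.08 us -> 1
```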

3.2. Optimization Problem for the SFC Routing and VNF Instance Dimensioning. The objective of the SFC Routing and VNF Instance Dimensioning (SRVID) problem is to maximize the number of accepted SFC requests. The output of the problem is characterized by the following: (i) the servers in which the VNF nodes are executed and (ii) the network paths in which the virtual links of the SFCs are routed. This arrangement has to be accomplished without violating either the server processing or the physical link capacities. We assume that each server creates a VNF instance for each type of VNF, which will be shared by the SFCs using that server and requesting that type of VNF. For this reason, each server will activate $F$ VNF instances, one for each type, and another output of the problem is to determine the number of Vcores to be assigned to each VNF instance.

We assume that any virtual link of the SFC graphs can be routed through a single physical network path. We introduce the following notations:

(i) $P$: set of paths in $G^{\rm PN}$;

(ii) $\delta_{d,p}$: the binary function assuming value 1 or 0 if the network link $d$ belongs or does not belong to the path $p \in P$, respectively;

(iii) $a^{\rm PN}(p)$ and $b^{\rm PN}(p)$: origin and destination nodes of the path $p \in P$;

(iv) $a^{\rm SFC}(d)$ and $b^{\rm SFC}(d)$: origin and destination nodes of the virtual link $d \in \bigcup_{i=1}^{T} E^{\rm SFC}_{i}$.

Next we formulate the optimal SRVID problem, characterized by the following optimization variables:

(i) $x_{h}$: binary variable assuming the value 1 if the $h$th SFC request is accepted; otherwise its value is zero;

(ii) $y_{w,k}$: integer variable characterizing the number of Vcores allocated to the VNF instance of type $k$ in the server $w \in V^{\rm PN}_{S}$;

(iii) $z^{k}_{v,w}$: binary variable assuming the value 1 if the VNF node $v \in \bigcup_{i=1}^{T} V^{\rm SFC}_{i,F}$ is served by the VNF instance of type $k$ in the server $w \in V^{\rm PN}_{S}$;

(iv) $u_{d,p}$: binary variable assuming the value 1 if the virtual link $d \in \bigcup_{i=1}^{T} E^{\rm SFC}_{i}$ is embedded in the physical network path $p \in P$; otherwise its value is zero.

Next we report the constraints for the optimization variables:

$\sum_{k=1}^{F} y_{w,k} \le N^{\rm PN}_{\rm core}(w), \quad w \in V^{\rm PN}_{S}$  (1)

$\sum_{k=1}^{F} \sum_{w \in V^{\rm PN}_{S}} z^{k}_{v,w} \le 1, \quad v \in \bigcup_{i=1}^{T} V^{\rm SFC}_{i,F}$  (2)

$z^{k}_{v,w} \le \beta_{v,k}, \quad k \in [1,F],\ v \in \bigcup_{i=1}^{T} V^{\rm SFC}_{i,F},\ w \in V^{\rm PN}_{S}$  (3)

$\sum_{v \in \bigcup_{i=1}^{T} V^{\rm SFC}_{i,F}} z^{k}_{v,w}\, B^{\rm SFC}(v)\, t^{\rm proc}_{k} \le y_{w,k}, \quad k \in [1,F],\ w \in V^{\rm PN}_{S}$  (4)

$u_{d,p} \le z^{k}_{a^{\rm SFC}(d),\,a^{\rm PN}(p)}, \quad a^{\rm SFC}(d) \in \bigcup_{i=1}^{T} V^{\rm SFC}_{i,F},\ k \in [1,F],\ p \in P$  (5)

$u_{d,p} \le z^{k}_{b^{\rm SFC}(d),\,b^{\rm PN}(p)}, \quad b^{\rm SFC}(d) \in \bigcup_{i=1}^{T} V^{\rm SFC}_{i,F},\ k \in [1,F],\ p \in P$  (6)

$u_{d,p} \le \alpha_{a^{\rm SFC}(d),\,a^{\rm PN}(p)}, \quad a^{\rm SFC}(d) \in \bigcup_{i=1}^{T} V^{\rm SFC}_{i,A},\ p \in P$  (7)

$u_{d,p} \le \alpha_{b^{\rm SFC}(d),\,b^{\rm PN}(p)}, \quad b^{\rm SFC}(d) \in \bigcup_{i=1}^{T} V^{\rm SFC}_{i,A},\ p \in P$  (8)

$\sum_{p \in P} u_{d,p} \le 1, \quad d \in \bigcup_{i=1}^{T} E^{\rm SFC}_{i}$  (9)

$\sum_{d \in \bigcup_{i=1}^{T} E^{\rm SFC}_{i}} C^{\rm SFC}(d) \sum_{p \in P} \delta_{e,p}\, u_{d,p} \le C^{\rm PN}(e), \quad e \in E^{\rm PN}$  (10)

$x_{h} \le \sum_{k=1}^{F} \sum_{w \in V^{\rm PN}_{S}} z^{k}_{v,w}, \quad h \in [1,T],\ v \in V^{\rm SFC}_{h,F}$  (11)

$x_{h} \le \sum_{p \in P} u_{d,p}, \quad h \in [1,T],\ d \in E^{\rm SFC}_{h}$  (12)

Constraint (1) establishes the fact that at most a number of Vcores equal to the available ones is used for each server node $w \in V^{\rm PN}_{S}$. Constraint (2) expresses the condition that a VNF can be served by one only VNF instance. To guarantee that a node $v \in V^{\rm SFC}_{h,F}$ needing the application of a VNF type is mapped to a correct VNF instance, we introduce constraint (3). Constraint (4) limits the number of VNF nodes assigned to one VNF instance by taking into account both the number of Vcores assigned to the VNF instance and the required VNF node processing capacities. Constraints (5)–(8) establish the fact that, when the virtual link $d$ is supported by the physical network path $p$, then $a^{\rm PN}(p)$ and $b^{\rm PN}(p)$ must be the physical network nodes that the nodes $a^{\rm SFC}(d)$ and $b^{\rm SFC}(d)$ of the virtual graph are assigned to. The choice of mapping any virtual link on a single physical network path is represented by constraint (9). Constraint (10) avoids any overloading of any physical network link. Finally, constraints (11) and (12) establish the fact that an SFC request can be accepted only when the nodes and the links of the SFC graph are assigned to VNF instances and physical network paths.

The problem objective is to maximize the number of SFCs accepted, given by

$\max \sum_{h=1}^{T} x_{h}$.  (13)

The introduced optimization problem is intractable because it requires solving an NP-hard bin packing problem [26]. Due to its high complexity, it is not possible to solve the problem directly in a timely manner, given the large number of servers and network nodes. For this reason, we propose an efficient heuristic in Section 3.3.
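To make the structure of the formulation concrete, the following toy sketch encodes a small subset of the model (constraints (1), (2), (4), (9), (11) and objective (13)) for a single SFC request with the PuLP modeling library; all data are invented and the remaining constraints are omitted for brevity, so this is not a substitute for the full model.

```python
# Toy SRVID sketch: one SFC request (h = 0), two servers, two VNF types,
# two candidate paths for its single virtual link.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, LpInteger

SERVERS = ["w1", "w2"]
TYPES = [0, 1]
N_CORE = {"w1": 48, "w2": 48}
T_PROC = {0: 7.08e-6, 1: 1.64e-6}                  # seconds per packet
VNF_NODES = [("v1", 0, 83.33), ("v2", 1, 83.33)]   # (node, type k, B_SFC(v))
PATHS = ["p1", "p2"]

m = LpProblem("SRVID", LpMaximize)
x = LpVariable("x0", cat=LpBinary)                 # accept the SFC request?
y = LpVariable.dicts("y", (SERVERS, TYPES), lowBound=0, cat=LpInteger)
z = LpVariable.dicts("z", ([v for v, _, _ in VNF_NODES], SERVERS), cat=LpBinary)
u = LpVariable.dicts("u", PATHS, cat=LpBinary)

m += x                                                           # objective (13)
for w in SERVERS:                                                # constraint (1)
    m += lpSum(y[w][k] for k in TYPES) <= N_CORE[w]
for v, _, _ in VNF_NODES:                                        # constraint (2)
    m += lpSum(z[v][w] for w in SERVERS) <= 1
for w in SERVERS:                                                # constraint (4)
    for k in TYPES:
        m += lpSum(z[v][w] * b * T_PROC[k]
                   for v, kk, b in VNF_NODES if kk == k) <= y[w][k]
m += lpSum(u[p] for p in PATHS) <= 1                             # constraint (9)
for v, _, _ in VNF_NODES:                                        # constraint (11)
    m += x <= lpSum(z[v][w] for w in SERVERS)

m.solve()
print("accepted:", int(x.value()))
```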

3.3. Maximizing the Accepted SFC Requests Number (MASRN) Heuristic. The Maximizing the Accepted SFC Requests Number (MASRN) heuristic is based on the embedding algorithm proposed in [14] and has the objective of maximizing the number of accepted SFC requests. The MASRN algorithm tries routing the SFCs one by one, by using the least loaded link and server resources so as to balance their use. Algorithm 1 illustrates the operation mode of the MASRN algorithm. The routing of a target SFC, the $k_{\rm tar}$th, is illustrated. The main inputs of the algorithm are: (i) the set $T_{\rm pre}$ containing the indexes of the SFC requests routed successfully before the $k_{\rm tar}$th SFC request; (ii) the $k_{\rm tar}$th SFC request graph $G^{\rm SFC}_{k_{\rm tar}}=(V^{\rm SFC}_{k_{\rm tar}},E^{\rm SFC}_{k_{\rm tar}})$; (iii) the values of the parameters $\alpha_{v,w}$ for the $k_{\rm tar}$th SFC request; (iv) the values of the variables $z^{k}_{v,w}$ and $u_{d,p}$, defined in Section 3.2, for the SFC requests successfully routed before the $k_{\rm tar}$th SFC request.

The operation mode of the heuristic is based on the following variables:

(i) $A_{k}(w)$: the load of the cores allocated for the VNF instance of type $k \in [1,F]$ in the server $w \in V^{\rm PN}_{S}$; the value of $A_{k}(w)$ is initialized by taking into account the core resource amount used by the SFCs successfully routed before the $k_{\rm tar}$th SFC request; hence its initial value is

$A_{k}(w)=\sum_{v \in \bigcup_{i \in T_{\rm pre}} V^{\rm SFC}_{i,F}} z^{k}_{v,w}\, B^{\rm SFC}(v)\, t^{\rm proc}_{k}$.  (14)

(ii) $S_{N}(w)$: defined as the stress of the node $w \in V^{\rm PN}_{S}$ [14] and characterizing the server load; its value is initialized to

$S_{N}(w)=\sum_{k=1}^{F} A_{k}(w)$.  (15)

(iii) $S_{L}(e)$: defined as the stress of the physical network link $e \in E^{\rm PN}$ [14] and characterizing the link load; its value is initialized to

$S_{L}(e)=\sum_{d \in \bigcup_{i \in T_{\rm pre}} E^{\rm SFC}_{i}} C^{\rm SFC}(d) \sum_{p \in P} \delta_{e,p}\, u_{d,p}$.  (16)
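The initialization of these quantities is straightforward to implement; the sketch below uses our own simplified data layout (not the authors' code) to build $A_{k}(w)$, $S_{N}(w)$, and $S_{L}(e)$ from the already routed SFCs, according to (14)-(16).

```python
from collections import defaultdict

def init_stresses(routed_vnf_nodes, routed_virtual_links, t_proc):
    """routed_vnf_nodes: iterable of (server w, VNF type k, B_SFC(v)) for the
    VNF nodes of previously routed SFCs; routed_virtual_links: iterable of
    (C_SFC(d), physical links of the selected path); t_proc: k -> t_proc_k."""
    A = defaultdict(float)      # (w, k) -> core load, eq. (14)
    S_N = defaultdict(float)    # w -> node stress, eq. (15)
    S_L = defaultdict(float)    # e -> link stress, eq. (16)
    for w, k, b_sfc in routed_vnf_nodes:
        load = b_sfc * t_proc[k]
        A[(w, k)] += load
        S_N[w] += load
    for c_sfc, path_links in routed_virtual_links:
        for e in path_links:
            S_L[e] += c_sfc
    return A, S_N, S_L
```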


(1) Input: Physical Network Graph $G^{\rm PN}=(V^{\rm PN},E^{\rm PN})$; the $k_{\rm tar}$th SFC request and its graph $G^{\rm SFC}_{k_{\rm tar}}=(V^{\rm SFC}_{k_{\rm tar}},E^{\rm SFC}_{k_{\rm tar}})$; $T_{\rm pre}$; $\alpha_{v,w}$, $v \in V^{\rm SFC}_{k_{\rm tar},A}$, $w \in V^{\rm PN}_{A}$; $z^{k}_{v,w}$, $k \in [1,F]$, $v \in \bigcup_{i \in T_{\rm pre}} V^{\rm SFC}_{i,F}$, $w \in V^{\rm PN}_{S}$; $u_{d,p}$, $d \in \bigcup_{i \in T_{\rm pre}} E^{\rm SFC}_{i}$, $p \in P$
(2) Variables: $A_{k}(w)$, $k \in [1,F]$, $w \in V^{\rm PN}_{S}$; $S_{N}(w)$, $w \in V^{\rm PN}_{S}$; $S_{L}(e)$, $e \in E^{\rm PN}$; $y_{w,k}$, $k \in [1,F]$, $w \in V^{\rm PN}_{S}$
/* Evaluation of the candidate mapping of $G^{\rm SFC}_{k_{\rm tar}}$ in $G^{\rm PN}$ */
(3) assign each node $v \in V^{\rm SFC}_{k_{\rm tar},A}$ to the physical network node $w \in V^{\rm PN}_{A}$ according to the value of the parameter $\alpha_{v,w}$
(4) select the nodes $w \in V^{\rm PN}_{S}$ and the physical network paths $p \in P$ to be assigned to the server nodes $v \in V^{\rm SFC}_{k_{\rm tar},F}$ and virtual links $d \in E^{\rm SFC}_{k_{\rm tar}}$ by applying the algorithm proposed in [14], based on the values of the node and link stresses $S_{N}(w)$ and $S_{L}(e)$; determine the variable values $z^{k}_{v,w}$, $k \in [1,F]$, $v \in V^{\rm SFC}_{k_{\rm tar},F}$, $w \in V^{\rm PN}_{S}$, and $u_{d,p}$, $d \in E^{\rm SFC}_{k_{\rm tar}}$, $p \in P$
/* Server Resource Availability Check Phase */
(5) for $v \in V^{\rm SFC}_{k_{\rm tar},F}$, $w \in V^{\rm PN}_{S}$, $k \in [1,F]$ do
(6)  if $\sum_{s=1,s\neq k}^{F} y_{w,s} + \lceil A_{k}(w) + z^{k}_{v,w} B^{\rm SFC}(v) t^{\rm proc}_{k} \rceil \le N^{\rm PN}_{\rm core}(w)$ then
(7)   $A_{k}(w) = A_{k}(w) + z^{k}_{v,w} B^{\rm SFC}(v) t^{\rm proc}_{k}$
(8)   $S_{N}(w) = S_{N}(w) + z^{k}_{v,w} B^{\rm SFC}(v) t^{\rm proc}_{k}$
(9)   $y_{w,k} = \lceil A_{k}(w) \rceil$
(10)  else
(11)   REJECT the $k_{\rm tar}$th SFC request
(12)  end if
(13) end for
/* Link Resource Availability Check Phase */
(14) for $e \in E^{\rm PN}$ do
(15)  if $S_{L}(e) + \sum_{d \in E^{\rm SFC}_{k_{\rm tar}}} C^{\rm SFC}(d) \sum_{p \in P} \delta_{e,p} u_{d,p} \le C^{\rm PN}(e)$ then
(16)   $S_{L}(e) = S_{L}(e) + \sum_{d \in E^{\rm SFC}_{k_{\rm tar}}} C^{\rm SFC}(d) \sum_{p \in P} \delta_{e,p} u_{d,p}$
(17)  else
(18)   REJECT the $k_{\rm tar}$th SFC request
(19)  end if
(20) end for
(21) Output: $z^{k}_{v,w}$, $k \in [1,F]$, $v \in V^{\rm SFC}_{k_{\rm tar},F}$, $w \in V^{\rm PN}_{S}$; $u_{d,p}$, $d \in E^{\rm SFC}_{k_{\rm tar}}$, $p \in P$

Algorithm 1: MASRN algorithm.
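The two availability-check phases of Algorithm 1 can be summarized in a few lines of code. The following sketch assumes the defaultdict-style maps produced by the previous snippet and a simplified candidate-mapping structure of our own; as in the pseudocode, the state is updated while the checks proceed.

```python
import math

def masrn_check(candidate, A, S_N, S_L, y, t_proc, n_core, c_pn):
    """candidate["vnf_mapping"]: list of (server w, VNF type k, B_SFC(v));
    candidate["link_mapping"]: list of (C_SFC(d), physical links of path)."""
    # Server Resource Availability Check Phase (lines (5)-(13))
    for w, k, b_sfc in candidate["vnf_mapping"]:
        load = b_sfc * t_proc[k]
        other_vcores = sum(y.get((w, s), 0) for s in t_proc if s != k)
        if other_vcores + math.ceil(A[(w, k)] + load) > n_core[w]:
            return False                     # REJECT the SFC request
        A[(w, k)] += load
        S_N[w] += load
        y[(w, k)] = math.ceil(A[(w, k)])
    # Link Resource Availability Check Phase (lines (14)-(20))
    extra = {}
    for c_sfc, path_links in candidate["link_mapping"]:
        for e in path_links:
            extra[e] = extra.get(e, 0.0) + c_sfc
    if any(S_L[e] + demand > c_pn[e] for e, demand in extra.items()):
        return False                         # REJECT the SFC request
    for e, demand in extra.items():
        S_L[e] += demand
    return True                              # the SFC request is accepted
```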

The candidate physical network links and nodes in which to embed the target SFC are evaluated. First of all, the access nodes are evaluated (line 3), according to the values of the parameters $\alpha_{v,w}$, $v \in V^{\rm SFC}_{k_{\rm tar},A}$, $w \in V^{\rm PN}_{A}$. Next, the MASRN algorithm selects a cluster of server nodes (line 4) that are not only lightly loaded but also likely to result in low substrate link stresses when they are connected. Details on the procedure are reported in [14].

In the next phases, the resource availability in the selected cluster is checked. The resource availability in the server nodes and physical network links is verified in the Server (lines 5–13) and Link (lines 14–20) Resource Availability Check Phases, respectively. If resources are not available in either links or nodes, the target SFC request is rejected. In particular, in the Server Resource Availability Check Phase, the algorithm verifies whether the allocation of new Vcores is needed, and in this case their availability is checked (line 6). The variable $y_{w,k}$ is also updated in this phase (line 9).

Finally, the outputs (line 21) of the MASRN algorithm are the selected values for the variables $z^{k}_{v,w}$, $k \in [1,F]$, $v \in V^{\rm SFC}_{k_{\rm tar},F}$, $w \in V^{\rm PN}_{S}$, and $u_{d,p}$, $d \in E^{\rm SFC}_{k_{\rm tar}}$, $p \in P$.

The computational complexity of the MASRN algorithm depends on the procedure for the evaluation of the potential nodes [14]. Let $N_{s}$ be the total number of servers. The complexity of the phase in which the potential nodes are evaluated can be characterized according to the following remarks: (i) as many servers as the number of VNFs of the SFC are determined, and (ii) the $h$th server is selected by evaluating the $(N_{s}-h)$ shortest paths and by performing a minimum operation on a list with $(N_{s}-h)$ elements. According to these remarks, the computational complexity is given by $O(V(V+N_{s})N_{l}\log(N_{s}+N_{n}))$, where $V$ is the maximum number of VNFs in an SFC and $N_{l}$ and $N_{n}$ are the numbers of nodes and links of the network.


Figure 2: SFC planner architecture: within the orchestrator, the SFC Scheduler receives the SFC requests and the scheduling decisions, the Resource Monitor provides resource information, and the Consolidation Module issues migration decisions.


4. Online Algorithms for SFC Routing in NFV Network Architectures

In the online approach, the SFC requests arrive at the system over time, and each of them is characterized by a certain duration. In order to reduce the complexity of the online SFC resource allocation algorithm, we designed an SFC planner whose general architecture is described in Section 4.1. The operation mode of the SFC planner is based on some algorithms that we describe in Section 4.2.

4.1. SFC Planner Architecture. The architecture of the SFC planner is shown in Figure 2. It could make up the main component of an orchestrator in the NFV architecture [6]. It is composed of three components: the SFC Scheduler, the Resource Monitor, and the Consolidation Module.

Once the SFC Scheduler receives an SFC request, it executes an algorithm to verify if resources are available and to decide which resources to use.

The physical resources, as well as the Virtual Machines running on the servers, are monitored by the Resource Monitor. If a failure of any physical/virtual node or link occurs, it notifies the event to the SFC Scheduler.

The Consolidation Module allows for server resource consolidation in low traffic periods. When the consolidation technique is applied, VMs are migrated to as few servers as possible, the remaining ones are turned off, and power consumption savings can be achieved.

4.2. SFC Scheduling and Consolidation Algorithms. In this section we describe the algorithms executed by the SFC Scheduler and the Consolidation Module.

Whenever a new SFC request arrives at the SFC planner, the SFC scheduling algorithm is executed in order to find the best embedding in the system while maintaining a uniform usage of the network resources. The main steps of this procedure are reported in the flow chart in Figure 3 and are based on the MASRN heuristic described in Section 3.3. The algorithm starts by selecting the set of servers on which the VNFs requested by the SFC will be executed. In the second step, the physical paths on which the virtual links will be mapped are selected. If there are not enough available resources in the first or in the second step, the request is dropped.

Figure 3: Flow chart of the SFC scheduling algorithm: servers are selected first (using the potential factor [26]), then the physical paths; the SFC request is dropped if either step runs out of resources, and accepted otherwise.

The flow chart of the consolidation algorithm is reported in Figure 4. Each server $s \in V^{\rm PN}_{S}$ is characterized by a parameter $\eta_{s}$, defined as the ratio of the total amount of incoming/outgoing traffic $\Lambda_{s}$ that it is handling to the power $P_{s}$ it is consuming, that is, $\eta_{s}=\Lambda_{s}/P_{s}$. The power consumption model assumes a constant power contribution and a variable one that linearly increases versus the handled traffic. The server power consumption is expressed by

$P_{s} = P_{\rm idle} + (P_{\max}-P_{\rm idle})\,\dfrac{L_{s}}{C_{s}}, \quad \forall s \in V^{\rm PN}_{S}$,  (17)

where $P_{\rm idle}$ is the power consumed by the server when no traffic is handled, $P_{\max}$ is the maximum server power consumption, $L_{s}$ is the total load handled by the server $s$, and $C_{s}$ is its maximum capacity. The maximum power that the server can consume is expressed as $P_{\max}=P_{\rm idle}/a$, where $a$ is a parameter characterizing how much the power consumption depends on the handled traffic. The lower $a$ is, the more rate adaptive the power consumption is. For $a=1$ the rate adaptive power consumption is absent and the consumed power is constant.
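A direct transcription of this power model, with illustrative parameter values of our own choosing, is given below; it also computes the efficiency metric $\eta_{s}$ used by the consolidation algorithm.

```python
def server_power(load_bps: float, capacity_bps: float,
                 p_idle: float, a: float) -> float:
    """P_s = P_idle + (P_max - P_idle) * L_s / C_s, with P_max = P_idle / a."""
    p_max = p_idle / a
    return p_idle + (p_max - p_idle) * load_bps / capacity_bps

def efficiency(traffic_bps: float, power_w: float) -> float:
    """eta_s = Lambda_s / P_s: traffic handled per watt consumed."""
    return traffic_bps / power_w

# For a = 1 the consumed power is constant (no rate adaptivity):
print(server_power(0, 10e9, 150.0, 1.0), server_power(10e9, 10e9, 150.0, 1.0))
# For a = 0.5 the power grows linearly from 150 W (idle) to 300 W (full load):
print(server_power(5e9, 10e9, 150.0, 0.5))   # -> 225.0
```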

The proposed algorithm tries switching off the serverwith minimum power consumption per bit carried For thisreason when the consolidation algorithm is executed it startsselecting the server 119904min with the minimum value of 120578

119904as


Figure 4: Flow chart of the SFC consolidation algorithm.

A set of possible destination servers able to host the VMs executed on $s_{\min}$ is then evaluated and sorted in descending order, based again on the value of $\eta_s$. The algorithm proceeds by selecting the first element of the ordered set and evaluating whether there are enough resources in the site to reroute all the flows generated from or terminated at $s_{\min}$. If it succeeds, the migration is performed and the server $s_{\min}$ is turned off. If it fails, the destination server is removed from the set and the next server of the list is considered. When the ordered set of possible destinations becomes empty, another server to turn off is selected.
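The switch-off loop just described can be summarized by the short Python sketch below, again under our own naming assumptions: the paper's feasibility test (enough Vcores on the destination and reroutable flows) is abstracted into a can_migrate callback, and eta is computed as $\Lambda_s / P_s$ as defined above.

```python
# Sketch of the consolidation loop: repeatedly try to switch off the
# server with the lowest eta_s = Lambda_s / P_s, moving its VMs to the
# highest-eta server that can host them.

def consolidate(traffic, power, can_migrate):
    """traffic: {server: Lambda_s (Gbps)}; power: {server: P_s (W)};
    can_migrate(src, dst): True if dst can absorb src's VMs and flows."""
    eta = lambda s: traffic[s] / power[s]
    on = set(traffic)           # servers currently powered on
    to_try = set(traffic)       # switch-off candidates not yet examined
    while to_try:
        s_min = min(to_try, key=eta)
        for dst in sorted(on - {s_min}, key=eta, reverse=True):
            if can_migrate(s_min, dst):
                traffic[dst] += traffic.pop(s_min)  # migrate the load
                on.remove(s_min)                    # and switch off
                break
        to_try.discard(s_min)   # whether moved or not, try the next one
    return on

# Toy run: the lowest-eta server is emptied first (any migration accepted).
print(consolidate({"s1": 2.0, "s2": 8.0, "s3": 1.0},
                  {"s1": 400.0, "s2": 900.0, "s3": 400.0},
                  lambda src, dst: True))
```

The toy run consolidates everything onto one server because can_migrate always succeeds; with a realistic feasibility test, servers whose VMs cannot be rehosted simply remain on and the loop moves to the next candidate.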

The SFC Consolidation Module also manages the resource deconsolidation procedure, when the reactivation of physical servers/links is needed. Basically, the deconsolidation algorithm selects the most loaded VMs already migrated to a new server and moves them back to their original servers. The flows handled by these VMs are rerouted accordingly.

5. Numerical Results

We evaluate the effectiveness of the proposed algorithms for the network scenario reported in Figure 5. The network is composed of five core nodes, five edge nodes, and six access nodes, in which the SFC requests are randomly generated and terminated. A Network Function Virtualization (NFV) site is connected to edge nodes 3 and 4 through two access routers, 1 and 2. The NFV site is composed of two switches and sixteen servers. In the basic configuration, 40 Gbps links are considered, except the links connecting the servers to the switches in the NFV site, whose rate is equal to 10 Gbps. The sixteen servers are equipped with 48 Vcores each.

Each SFC request is composed of three Virtual Network Functions (VNFs), that is, a firewall, a load balancer, and a VPN encryption, whose packet processing times are assumed to be equal to 7.08 μs, 0.65 μs, and 1.64 μs [27], respectively. We also assume that the load balancer splits the input traffic towards a number of output links chosen uniformly from 1 to 3.

An example of the graph for a generated SFC is reported in Figure 6, in which $u_t$ is the ingress access node and $v_t^1$, $v_t^2$, and $v_t^3$ are the egress access nodes. The first and second VNFs are a firewall and a VPN encryption; the third VNF is a load balancer splitting the traffic towards three output links. In the considered case study, we also assume that the SFC handles traffic flows of packet length equal to 1500 bytes and required bandwidth uniformly distributed from 500 Mbps to 1 Gbps.
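For illustration, the snippet below generates one SFC request with the case-study parameters (processing times from [27], a 1-to-3-way split at the load balancer, 1500-byte packets, and bandwidth uniform between 500 Mbps and 1 Gbps); the dictionary layout and field names are our own assumptions.

```python
# Illustrative generator for one SFC request of the case study.
import random

PROC_TIME_US = {"firewall": 7.08, "load_balancer": 0.65, "vpn": 1.64}  # [27]

def generate_sfc_request():
    return {
        "vnfs": ["firewall", "vpn", "load_balancer"],  # order as in Figure 6
        "egress_branches": random.randint(1, 3),       # load balancer fan-out
        "bandwidth_gbps": random.uniform(0.5, 1.0),    # per-flow demand
        "packet_bytes": 1500,
    }

print(generate_sfc_request())
```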

5.1. Offline SFC Scheduling. In the offline case, we assume that a given number $T$ of SFC requests is known a priori. For the case study previously described, we apply the MASRN heuristic proposed in Section 3.3. We report the percentage of dropped SFCs in Figure 7 as a function of the offered number of SFCs. We report the curve for the basic scenario and the ones in which the link capacities are doubled, quadrupled, and increased by ten times, respectively. From Figure 7 we can see that, in order to have a dropped SFC percentage lower than 10%, the number of SFC requests has to be smaller than 60 in the basic scenario. This remarkable blocking is due to the shortage of network capacity. In fact, we can notice from Figure 7 how the dropped SFC percentage significantly decreases when the link bandwidth increases. For instance, when the capacity is quadrupled, the network infrastructure is able to accommodate up to 180 SFCs without any loss.

This is confirmed in Figure 8, where we report the number of Vcores occupied in the servers in the case of $T = 360$. Each of the servers is represented on the x-axis with the identification (ID) of Figure 5. We also show in Figure 8 the number of Vcores allocated to each firewall, load balancer, and VPN encryption VNF with the blue, green, and red colors, respectively. The basic scenario case and the ones in which the link capacity is doubled, quadrupled, and increased by 10 times are reported in Figures 8(a), 8(b), 8(c), and 8(d), respectively. From Figure 8 we can notice how the MASRN heuristic works correctly, by guaranteeing a uniform occupancy of the servers in terms of the total number of Vcores used. We can also notice how the firewall VNF is the one needing the highest number of allocated Vcores. That is a consequence of the higher processing time that the firewall VNF requires with respect to the load balancer and VPN encryption VNFs.

5.2. Online SFC Scheduling. The numerical results are achieved in the following traffic scenario. The traffic pattern we consider is supposed to be sinusoidal, with a peak value during daytime and a minimum value during nighttime. We also assume that SFC requests follow a Poisson process and that the SFC duration time is distributed according to an exponential distribution.


Figure 5: The network topology is composed of six access nodes, five edge nodes, and five core nodes. The NFV site is composed of two routers, two switches, and sixteen servers.

The following parameters define the SFC requests in the Peak Hour Interval (PHI):

(i) $\lambda$ is the arrival rate of SFC requests;
(ii) $\mu$ is the termination rate of an embedded SFC;
(iii) $[\beta^{\min}, \beta^{\max}]$ is the range in which the bandwidth request for an SFC is randomly selected.

We consider $K$ time intervals in a day, and each of them is characterized by a scaling factor $\alpha_h$, with $h = 0, \dots, K-1$. In order to capture the sinusoidal shape of the traffic in a day, $\alpha_h$ is evaluated with the following expression:

$$\alpha_h = \begin{cases} 1, & h = 0,\\[4pt] 1 - \dfrac{2h}{K}\,(1 - \alpha_{\min}), & h = 1, \dots, \dfrac{K}{2},\\[4pt] 1 - \dfrac{2(K-h)}{K}\,(1 - \alpha_{\min}), & h = \dfrac{K}{2} + 1, \dots, K - 1, \end{cases} \qquad (18)$$

where $\alpha_0$ and $\alpha_{\min}$ refer to the peak traffic and the minimum traffic scenario, respectively. The parameter $\alpha_h$ affects the SFC arrival rate and the bandwidth requested. In particular, we have the following


Figure 6: An example of SFC composed of one firewall VNF, one VPN encryption VNF, and one load balancer VNF that splits the input traffic towards three output logical links. $u_t$ denotes the ingress access node, while $v_t^1$, $v_t^2$, and $v_t^3$ denote the egress access nodes.

Figure 7: Dropped SFC percentage as a function of the number of SFCs offered. The four curves are reported for the basic scenario case and the ones in which the link capacities are doubled, quadrupled, and increased by 10 times.

expressions for the SFC request rate $\lambda_h$ and the minimum and maximum requested bandwidths $\beta_h^{\min}$ and $\beta_h^{\max}$, respectively, in the interval $h$ ($h = 0, \dots, K-1$):

$$\lambda_h = \alpha_h \lambda, \qquad \beta_h^{\min} = \alpha_h \beta^{\min}, \qquad \beta_h^{\max} = \alpha_h \beta^{\max}. \qquad (19)$$

Next, we evaluate the performance of the SFC planner proposed in Section 4 in the dynamic traffic scenario, for parameter values $\beta^{\min} = 500$ Mbps, $\beta^{\max} = 1$ Gbps, $\lambda = 1$ request/min, and $\mu = 1/15\ \text{min}^{-1}$. We also assume $K = 24$, with a traffic change, and consequently an application of the consolidation algorithm, every hour.

Finally, we consider servers whose power consumption is characterized by expression (17), with parameters $P_{\max} = 1000$ W and $C_s = 10$ Gbps.
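As a quick numerical check of (17) with these parameters (and recalling that $P_{\max} = P_{\text{idle}}/a$ implies $P_{\text{idle}} = a P_{\max}$), consider the following sketch; the function name and the 5 Gbps sample load are illustrative.

```python
# Worked example of the server power model (17); P_idle = a * P_max.
def server_power(load_gbps, a, p_max=1000.0, capacity=10.0):
    p_idle = a * p_max
    return p_idle + (p_max - p_idle) * load_gbps / capacity

print(server_power(5.0, a=1.0))   # 1000.0 W: power constant, no adaptivity
print(server_power(5.0, a=0.1))   # 550.0 W: strongly rate adaptive server
```

The two printed values anticipate the discussion below: with $a = 1$ the server burns full power regardless of load, so switching it off is the only way to save energy, while with $a = 0.1$ the idle cost is small and switching a server off saves correspondingly less.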

We report the SFC blocking probability as a function of the average number of offered SFC requests during the PHI ($\lambda/\mu$) in Figure 9. We show the results for the basic link capacity scenario and for the ones obtained considering a capacity of the links that is twice and four times higher than the basic one. We consider the case of servers with no rate adaptive power consumption ($a = 1$). As we can observe, when the offered traffic is low, the degradation in terms of SFC blocking probability introduced by the consolidation technique increases. In fact, in this low traffic scenario, more resource consolidation is possible, with the consequence of higher SFC blocking probabilities. When the offered traffic increases, the blocking probability with or without applying the consolidation technique does not change much, because the network is heavily loaded and only few servers can be turned off. Furthermore, as we can expect, the blocking probability decreases when the link capacity increases. The performance loss due to the application of the consolidation technique is what we have to pay in order to obtain benefits in terms of power saved by turning off servers. The curves in Figure 10 compare the power consumption percentage savings that we can achieve in the cases $a = 0.1$ and $a = 1$. The results show that, when the offered traffic decreases and the link capacity increases, higher power consumption savings can be achieved. The reason is that we have more resources on the links and less loaded VMs, which allow for a higher number of VM migrations and more resource consolidation. The other result to notice is that the use of rate adaptive servers reduces the power consumption saving achieved when we perform resource consolidation. This is due to the lower value $P_{\text{idle}}$ of the constant power consumption of the server, which leads to a lower power consumption saving when a server is switched off.

6. Conclusions

The aim of this paper has been to propose heuristics for the resource dimensioning and the routing of Service Function Chains in network architectures employing the Network Function Virtualization technology. We have introduced an optimization problem whose objective is to minimize the number of dropped SFCs while complying with the server processing and link bandwidth capacities. With the problem being NP-hard, we have proposed the Maximizing the Accepted SFC Requests Number (MASRN) heuristic, which is based on a uniform occupancy of the server and link resources. The heuristic is also able to dimension the Vcores of the servers, by evaluating the number of Vcores to be allocated to each VNF instance.

An SFC planner has also been proposed for the dynamic traffic scenario. The planner is based on a consolidation algorithm that allows for the switching off of servers during the low traffic periods. A case study has been analyzed, in which we have shown that the heuristic works correctly, by uniformly allocating the server resources and reserving a higher number of Vcores for the VNFs characterized by a higher packet processing time. Furthermore, the proposed SFC planner allows for a power consumption saving dependent on the offered traffic and varying from 10% to 70%. Such a saving is to be paid with an increase in SFC blocking probability. As future research, we will propose and investigate Virtual Machine migration policies


Figure 8: Number of allocated Vcores in each server, for $T = 360$: (a) basic scenario; (b) link capacity doubled; (c) link capacity quadrupled; (d) link capacity increased by 10 times. The numbers of Vcores for the firewall, load balancer, and VPN encryption VNFs are represented with blue, green, and red colors, respectively.

Figure 9: SFC blocking probability as a function of the average number of SFCs offered in the PHI, for the cases in which the consolidation technique is applied and in which it is not. The server power consumption is constant ($a = 1$).


Figure 10: The power consumption percentage saving achieved with the application of the consolidation technique, versus the average number of SFCs offered in the PHI. The cases $a = 1$ and $a = 0.1$ are considered.

allowing for the achievement of the right trade-off between power consumption saving and SFC blocking probability degradation.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

The work described in this paper has been funded by Telecom Italia within the research agreement titled "Definition and Evaluation of Protocols and Architectures of a Network Automation Platform for TI-NVF Infrastructure".

References

[1] R. Mijumbi, J. Serrat, J. Gorricho, N. Bouten, F. De Turck, and R. Boutaba, "Network function virtualization: state-of-the-art and research challenges," IEEE Communications Surveys & Tutorials, vol. 18, no. 1, pp. 236–262, 2016.
[2] P. Veitch, M. J. McGrath, and V. Bayon, "An instrumentation and analytics framework for optimal and robust NFV deployment," IEEE Communications Magazine, vol. 53, no. 2, pp. 126–133, 2015.
[3] R. Yu, G. Xue, V. T. Kilari, and X. Zhang, "Network function virtualization in the multi-tenant cloud," IEEE Network, vol. 29, no. 3, pp. 42–47, 2015.
[4] Z. Bronstein, E. Roch, J. Xia, and A. Molkho, "Uniform handling and abstraction of NFV hardware accelerators," IEEE Network, vol. 29, no. 3, pp. 22–29, 2015.
[5] B. Han, V. Gopalakrishnan, L. Ji, and S. Lee, "Network function virtualization: challenges and opportunities for innovations," IEEE Communications Magazine, vol. 53, no. 2, pp. 90–97, 2015.
[6] ETSI Industry Specification Group (ISG) NFV, "ETSI Group Specifications on Network Function Virtualization, 1st Phase Documents," January 2015, http://docbox.etsi.org/ISG/NFV/Open.
[7] H. Hawilo, A. Shami, M. Mirahmadi, and R. Asal, "NFV: state of the art, challenges and implementation in next generation mobile networks (vEPC)," IEEE Network, vol. 28, no. 6, pp. 18–26, 2014.
[8] The Internet Engineering Task Force (IETF), Service Function Chaining (SFC) Working Group (WG), 2015, https://datatracker.ietf.org/wg/sfc/charter.
[9] The Internet Engineering Task Force (IETF), Service Function Chaining (SFC) Working Group (WG) Documents, 2015, https://datatracker.ietf.org/wg/sfc/documents.
[10] M. Yoshida, W. Shen, T. Kawabata, K. Minato, and W. Imajuku, "MORSA: a multi-objective resource scheduling algorithm for NFV infrastructure," in Proceedings of the 16th Asia-Pacific Network Operations and Management Symposium (APNOMS '14), pp. 1–6, Hsinchu, Taiwan, September 2014.
[11] H. Moens and F. D. Turck, "VNF-P: a model for efficient placement of virtualized network functions," in Proceedings of the 10th International Conference on Network and Service Management (CNSM '14), pp. 418–423, November 2014.
[12] R. Mijumbi, J. Serrat, J. L. Gorricho, N. Bouten, F. De Turck, and S. Davy, "Design and evaluation of algorithms for mapping and scheduling of virtual network functions," in Proceedings of the 1st IEEE Conference on Network Softwarization (NetSoft '15), pp. 1–9, University College London, London, UK, April 2015.
[13] S. Clayman, E. Maini, A. Galis, A. Manzalini, and N. Mazzocca, "The dynamic placement of virtual network functions," in Proceedings of the IEEE Network Operations and Management Symposium (NOMS '14), pp. 1–9, Krakow, Poland, May 2014.
[14] Y. Zhu and M. H. Ammar, "Algorithms for assigning substrate network resources to virtual network components," in Proceedings of the 25th IEEE International Conference on Computer Communications (INFOCOM '06), pp. 1–12, IEEE, Barcelona, Spain, April 2006.
[15] G. Lee, M. Kim, S. Choo, S. Pack, and Y. Kim, "Optimal flow distribution in service function chaining," in Proceedings of the 10th International Conference on Future Internet (CFI '15), pp. 17–20, ACM, Seoul, Republic of Korea, June 2015.
[16] A. Fischer, J. F. Botero, M. T. Beck, H. De Meer, and X. Hesselbach, "Virtual network embedding: a survey," IEEE Communications Surveys and Tutorials, vol. 15, no. 4, pp. 1888–1906, 2013.
[17] M. Rabbani, R. Pereira Esteves, M. Podlesny, G. Simon, L. Zambenedetti Granville, and R. Boutaba, "On tackling virtual data center embedding problem," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 177–184, Ghent, Belgium, May 2013.
[18] A. Basta, W. Kellerer, M. Hoffmann, H. J. Morper, and K. Hoffmann, "Applying NFV and SDN to LTE mobile core gateways: the functions placement problem," in Proceedings of the 4th Workshop on All Things Cellular: Operations, Applications and Challenges (AllThingsCellular '14), pp. 33–38, ACM, Chicago, Ill, USA, August 2014.
[19] M. Bagaa, T. Taleb, and A. Ksentini, "Service-aware network function placement for efficient traffic handling in carrier cloud," in Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC '14), pp. 2402–2407, IEEE, Istanbul, Turkey, April 2014.
[20] S. Mehraghdam, M. Keller, and H. Karl, "Specifying and placing chains of virtual network functions," in Proceedings of the IEEE 3rd International Conference on Cloud Networking (CloudNet '14), pp. 7–13, October 2014.
[21] M. Bouet, J. Leguay, and V. Conan, "Cost-based placement of vDPI functions in NFV infrastructures," in Proceedings of the 1st IEEE Conference on Network Softwarization (NetSoft '15), pp. 1–9, London, UK, April 2015.
[22] M. Xia, M. Shirazipour, Y. Zhang, H. Green, and A. Takacs, "Network function placement for NFV chaining in packet/optical datacenters," Journal of Lightwave Technology, vol. 33, no. 8, Article ID 7005460, pp. 1565–1570, 2015.
[23] O. Hyeonseok, Y. Daeun, C. Yoon-Ho, and K. Namgi, "Design of an efficient method for identifying virtual machines compatible with service chain in a virtual network environment," International Journal of Multimedia and Ubiquitous Engineering, vol. 11, no. 9, Article ID 197208, 2014.
[24] W. Cerroni and F. Callegati, "Live migration of virtual network functions in cloud-based edge networks," in Proceedings of the IEEE International Conference on Communications (ICC '14), pp. 2963–2968, IEEE, Sydney, Australia, June 2014.
[25] P. Quinn and J. Guichard, "Service function chaining: creating a service plane via network service headers," Computer, vol. 47, no. 11, pp. 38–44, 2014.
[26] M. F. Zhani, Q. Zhang, G. Simona, and R. Boutaba, "VDC Planner: dynamic migration-aware Virtual Data Center embedding for clouds," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 18–25, May 2013.
[27] M. Dobrescu, K. Argyraki, and S. Ratnasamy, "Toward predictable performance in software packet-processing platforms," in Proceedings of the 9th USENIX Symposium on Networked Systems Design and Implementation, pp. 1–14, April 2012.

Research Article

A Game for Energy-Aware Allocation of Virtualized Network Functions

Roberto Bruschi,1 Alessandro Carrega,1,2 and Franco Davoli1,2

1CNIT-University of Genoa Research Unit, 16145 Genoa, Italy
2DITEN, University of Genoa, 16145 Genoa, Italy

Correspondence should be addressed to Alessandro Carrega; alessandro.carrega@unige.it

Received 2 October 2015; Revised 22 December 2015; Accepted 11 January 2016

Academic Editor: Xavier Hesselbach

Copyright © 2016 Roberto Bruschi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Network Functions Virtualization (NFV) is a network architecture concept where network functionality is virtualized and separated into multiple building blocks that may connect or be chained together to implement the required services. The main advantages consist of an increase in network flexibility and scalability. Indeed, each part of the service chain can be allocated and reallocated at runtime, depending on demand. In this paper, we present and evaluate an energy-aware Game-Theory-based solution for resource allocation of Virtualized Network Functions (VNFs) within NFV environments. We consider each VNF as a player of the problem that competes for the physical network node capacity pool, seeking the minimization of individual cost functions. The physical network nodes dynamically adjust their processing capacity according to the incoming workload, by means of an Adaptive Rate (AR) strategy that aims at minimizing the product of energy consumption and processing delay. On the basis of the result of the nodes' AR strategy, the VNFs' resource sharing costs assume a polynomial form in the workloads, which admits a unique Nash Equilibrium (NE). We examine the effect of different (unconstrained and constrained) forms of the nodes' optimization problem on the equilibrium and compare the power consumption and delay achieved with energy-aware and non-energy-aware strategy profiles.

1. Introduction

In the last few years, power consumption has shown a growing and alarming trend in all industrial sectors, particularly in Information and Communication Technology (ICT). Public organizations, Internet Service Providers (ISPs), and telecom operators started reporting alarming statistics of network energy requirements and of the related carbon footprint since the first decade of the 2000s [1]. The Global e-Sustainability Initiative (GeSI) estimated a growth of ICT greenhouse gas emissions (in GtCO2e, Gt of CO2 equivalent gases) to 2.3% of global emissions (from 1.3% in 2002) in 2020, if no Green Network Technologies (GNTs) were adopted [2]. On the other hand, the abatement potential of ICT in other industrial sectors is seven times the size of the ICT sector's own carbon footprint.

Only recently, due to the rise in energy prices, the continuous growth of the customer population, the increase in broadband access demand, and the expanding number of services being offered by telecoms and providers, has energy efficiency become a high-priority objective also for wired networks and service infrastructures (after having started to be addressed for datacenters and wireless networks).

The increasing network energy consumption essentially depends on new services offered, which follow Moore's law by doubling every two years, and on the need to sustain an ever-growing population of users and user devices. In order to support new generation network infrastructures and related services, telecoms and ISPs need a larger equipment base, with sophisticated architecture able to perform more and more complex operations in a scalable way. Notwithstanding these efforts, it is well known that most networks and networking equipment are currently still provisioned for busy or rush hour load, which typically exceeds their average utilization by a wide margin. While this margin is generally reached in rare and short time periods, the overall power consumption in today's networks remains more or less constant with respect to different traffic utilization levels.



The growing trend toward implementing networking functionalities by means of software [3] on general-purpose machines and of making more aggressive use of virtualization, as represented by the paradigms of Software Defined Networking (SDN) [4] and Network Functions Virtualization (NFV) [5], would also not be sufficient in itself to reduce power consumption, unless accompanied by "green" optimization and consolidation strategies acting as energy-aware traffic engineering policies at the network level [6]. At the same time, processing devices inside the network need to be capable of adapting their performance to the changing traffic conditions, by trading off power and Quality of Service (QoS) requirements. Among the various techniques that can be adopted to this purpose to implement Control Policies (CPs) in network processing devices, Dynamic Adaptation ones consist of adapting the processing rate (AR) or of exploiting low power consumption states in idle periods (LPI) [7].

In this paper, we introduce a Game-Theory-based solution for energy-aware allocation of Virtualized Network Functions (VNFs) within NFV environments. In more detail, in NFV networks a collection of service chains must be allocated on physical network nodes. A service chain is a set of one or more VNFs grouped together to provide specific service functionality, and it can be represented by an oriented graph, where each node corresponds to a particular VNF and each edge describes the operational flow exchanged between a pair of VNFs.

A service request can be allocated on dedicated hardware or by using resources deployed by the Service Provider (SP) that processes the request through virtualized instances. Because of this, two types of service deployments are possible in an NFV network: (i) on physical nodes and (ii) on virtualized instances.

In this paper, we focus on the second type of service deployment. We refer to this paradigm as pure NFV. As already outlined above, the SP processes the service request by means of VNFs. In particular, we developed an energy-aware solution to the problem of VNFs' allocation on physical network nodes. This solution is based on the concept of Game Theory (GT). GT is used to model interactions among self-interested players and predict their choice of strategies to optimize cost or utility functions, until a Nash Equilibrium (NE) is reached, where no player can further increase its corresponding utility through individual action (see, e.g., [8] for specific applications in networking).

More specifically, we consider a bank of physical network nodes (in this paper we also use the terms node and resource to refer to the physical network node) performing tasks on requests submitted by the players' population. Hence, in this game the role of the players is represented by VNFs that compete for the processing capacity pool, each by seeking the minimization of an individual cost function. The nodes can dynamically adjust their processing capacity according to the incoming workload (the processing power required by incoming VNF requests), by means of an AR strategy that aims at minimizing the product of energy consumption and processing delay. On the basis of the result of the nodes' AR strategy, the VNFs' resource sharing costs assume a polynomial form in the workloads, which admits a unique NE.

We examine the effect of different (unconstrained and constrained) forms of the nodes' optimization problem on the equilibrium and compare the power consumption and delay achieved with energy-aware and non-energy-aware strategy profiles.

The rest of the paper is organized as follows. Section 2 discusses the related work. Section 3 briefly summarizes the power model and AR optimization that were introduced in [9]; although the scenario in [9] is different, it is reasonable to assume that similar considerations are valid here too. On the basis of this result, we derive a number of competitive strategies for the VNFs' allocation in Section 4. Section 5 presents various numerical evaluations, and Section 6 contains the conclusions.

2. Related Work

We now discuss the most relevant works that deal with the resource allocation of VNFs within NFV environments. We decided not only to describe solutions based on GT, but also to provide an overview of the current state of the art in this area. As introduced in the previous section, the key enabling paradigms that will considerably affect the dynamics of ICT networks are SDN and NFV, which are discussed in recent surveys [10–12]. Indeed, SPs and Network Operators (NOs) are facing increasing problems to design and implement novel network functionalities, following the rapid changes that characterize the current ISPs and telecom operators (TOs) [13].

Virtualization represents an efficient and cost-effective strategy to exploit and share physical network resources. In this context, the Network Embedding Problem (NEP) has been considered in several recent works [14–19]. In particular, the Virtual Network Embedding Problem (VNEP) consists of finding the mapping between a set of requests for virtual network resources and the available underlying physical infrastructure (the so-called substrate), ensuring that some given performance requirements (on nodes and links) are guaranteed. Typical node requirements are computational resources (i.e., CPU) or storage space, whereas links have a limited bandwidth and introduce a delay. It has been shown that this problem is NP-hard (it includes as subproblem the multiway separator problem). For this reason, heuristic approaches have been devised [20].

The consolidation of virtual resources is considered in [18], by taking into account energy efficiency. The problem is formulated as a mixed integer linear programming (MILP) model, to understand the potential benefits that can be achieved by packing many different virtual tasks on the same physical infrastructure. The observed energy saving is up to 30% for node energy consumption and up to 25% for link energy consumption. Reference [21] presents a solution for the resilient deployment of network functions, using OpenStack for the design and implementation of the proposed service orchestrator mechanism.

An allocation mechanism based on auction theory is proposed in [22]. In particular, the scheme selects the most remunerative virtual network requests, while satisfying QoS requirements and physical network constraints. The system is split into two network substrates modeling physical and virtual resources, with the final goal of finding the best mapping of virtual nodes and links onto physical ones, according to the QoS requirements (i.e., bandwidth, delay, and CPU bounds).

A novel network architecture is proposed in [23] to provide efficient coordinated control of both internal network function state and network forwarding state, in order to help operators achieve the following goals: (i) offering and satisfying tight service level agreements (SLAs); (ii) accurately monitoring and manipulating network traffic; and (iii) minimizing operating expenses.

Various engineering problems, where the action of one component has some impact on the other components, have been modeled by GT. Indeed, GT, which was applied at the beginning in economics and related domains, is gaining much interest today as a powerful tool to analyze and design communication networks [24]. Therefore, the problems can be formulated in the GT framework, and a stable solution for the components is obtained using the concept of equilibrium [25]. In this regard, GT has been used extensively to develop an understanding of stable operating points for autonomous networks. The nodes are considered as the players. Payoff functions are often defined according to achieved connection bandwidth or similar technical metrics.

To construct algorithms with provable convergence to equilibrium points, many approaches consider network models that can be mapped to specially constructed games. Among this type of games, potential games use a real-valued function that represents the entire player set to optimize some performance metric [8, 26]. We mention Altman et al. [27], who provide an extensive survey on networking games. The models and papers discussed in this reference mostly deal with noncooperative GT, the only exception being a short section focused on bargaining games. Finally, Seddiki et al. [25] presented an approach based on two-stage noncooperative games for bandwidth allocation, which aims at reducing the complexity of network management and avoiding bandwidth performance problems in a virtualized network environment. The first stage of the game is the bandwidth negotiation, where the SP requests bandwidth from multiple Infrastructure Providers (InPs). Each InP decides whether to accept or deny the request, when the SP would cause link congestion. The second stage is the bandwidth provisioning game, where different SPs compete for the bandwidth capacity of a physical link managed by a single InP.

3. Modeling Power Management Techniques

Dynamic Adaptation techniques in physical network nodes reduce the energy usage by exploiting the fact that systems do not need to run at peak performance all the time.

Rate adaptation is obtained by tuning the clock frequency and/or voltage of processors (DVFS: Dynamic Voltage and Frequency Scaling), or by throttling the CPU clock (i.e., the clock signal is disabled for some number of cycles at regular intervals). Decreasing the operating frequency and the voltage of the node, or throttling its clock, obviously allows the reduction of power consumption and of heat dissipation, at the price of slower performance.

Advantage can be taken of these adaptation capabilities to devise control techniques based on feedback on the incoming load. One of the first proposals is that examined in a seminal paper by Nedevschi et al. [28]. Among other possibilities, we consider here the simple optimization strategy developed in [9], aimed at minimizing the product of power consumption and processing delay with respect to the processing load. The application of this control technique gives rise to a quadratic dependence of the power-delay product on the load, which will be exploited in our subsequent game-theoretic resource allocation problem. Unlike the cited works, our solution considers as load the request rate of services that the SP must process. However, apart from this difference, the general considerations described are valid here too.

Specifically, the dynamic frequency-dependent part of the processor power consumption is proportional to the clock frequency $\nu$ and to the square of the supply voltage $V$ [29]. However, if DVFS is used, there is proportionality between the frequency and the voltage raised to some power $\gamma$, $0 < \gamma \le 1$ [30]. Moreover, the power scaling effect induced by alternating active-idle periods in the queueing system served by the physical network node can be accounted for by multiplying the dynamic power consumption by an increasing concave function of the resource utilization, of the form $(f/\mu)^{1/\alpha}$, where $\alpha \ge 1$ is a parameter and $f$ and $\mu$ are the workload (processing requests per unit time) and the task processing speed, respectively. By taking $\gamma = 1$ (which implies a cubic relationship in the operating frequency), the overall dependence of the power consumption $\Phi$ on the processing speed (which is proportional to the operating frequency) can be expressed as [9]

$$\Phi(\mu) = K \mu^3 \left(\frac{f}{\mu}\right)^{1/\alpha}, \qquad (1)$$

where $K > 0$ is a proportionality constant. This expression will be exploited in the optimization problems to be defined and studied in the next section.
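As a toy numerical illustration of (1) (all quantities dimensionless, with hypothetical values of $K$ and the rates), the function below shows how the exponent $\alpha$ shapes the dependence on the workload $f$ at a fixed speed $\mu$.

```python
# Numerical illustration of the power model (1).
def phi(mu, f, K=1.0, alpha=2.0):
    """Dynamic power of a node serving workload f at speed mu, eq. (1)."""
    return K * mu**3 * (f / mu) ** (1.0 / alpha)

print(phi(mu=1.0, f=0.5, alpha=2.0))  # ~0.707: concave scaling with load
print(phi(mu=1.0, f=0.5, alpha=1.0))  # 0.5: linear in f when alpha = 1
```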

4. System Model of Coordinated Network Management Optimization and Players' Game

We consider the situation shown in Figure 1. There are $S$ VNFs, represented by the incoming workload rates $\lambda_1, \dots, \lambda_S$, competing for $N$ physical network nodes. The workload to the $j$th node is given by

$$f_j = \sum_{i=1}^{S} f_j^i, \qquad (2)$$

where $f_j^i$ is VNF $i$'s contribution. The total workload vector of player $i$ is $\mathbf{f}^i = \operatorname{col}[f_1^i, \dots, f_N^i]$ and, obviously,

$$\sum_{j=1}^{N} f_j^i = \lambda_i. \qquad (3)$$


Figure 1: System model for VNFs resource allocation.

The computational resources have processing rates $\mu_j$, $j = 1, \dots, N$, and we associate a cost function with each one, namely,

$$c_j(\mu_j) = \frac{K_j (f_j)^{1/\alpha_j} (\mu_j)^{3 - (1/\alpha_j)}}{\mu_j - f_j}. \qquad (4)$$

Cost functions of the form shown in (4) have been introduced in [9] and represent the product of power and processing delay under an M/M/1 assumption on each node's queue. They actually correspond to the average energy consumption that can be attributed to the functions' handling (considering that the node will be active whenever there are queued functions). Despite the possible inaccuracy in the representation of the queueing behavior, the denominator of (4) reflects the penalty paid for approaching the resource capacity, which is a major aspect in a model to be used for control purposes.

We now consider two specific control problems, whose resulting resource allocations are interconnected. Specifically, we assume the presence of Control Policies (CPs) acting in the network nodes, whose aim is to dynamically adjust the processing rate in order to minimize cost functions of the form of (4). On the other hand, the players representing the VNFs, knowing the policy adopted by the CPs and the resulting form of their optimal costs, implement their own strategies to distribute their requests among the different resources.

The simple control structure that we envisage actually represents a situation that may be of interest in a number of different operational settings. We only mention two different cases of current high relevance: (i) a multitenant environment, where a number of ISPs are offered services by a datacenter operator (or by a telecom operator in the access network) on a number of virtual machines, and (ii) a collection of virtual environments created by a SP on behalf of its customers, where services are activated to represent customers' functionalities on their own private virtual LANs. It is worth noting that, in both cases, the nodes' customers may be unwilling to disclose their loads to one another, which justifies the decentralized game optimization.

4.1. Nodes' Control Policies. We consider two possible variants.

4.1.1. CP1 (Unconstrained). The direct minimization of (4) with respect to $\mu_j$ yields immediately

$$\mu_j^*(f_j) = \frac{3\alpha_j - 1}{2\alpha_j - 1}\, f_j = \vartheta_j f_j, \qquad c_j^*(f_j) = K_j\, \frac{(\vartheta_j)^{3 - (1/\alpha_j)}}{\vartheta_j - 1}\, (f_j)^2 = h_j \cdot (f_j)^2. \qquad (5)$$

We must note that we are considering a continuous solution to the node capacity adjustment. In practice, the physical resources allow a discrete set of working frequencies, with corresponding processing capacities. This would also ensure that the processing speed does not decrease below a lower threshold, avoiding excessive delay in the case of low load. The unconstrained problem is an idealized variant that would make sense only when the load on the node is not too small.
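The closed form (5) is straightforward to evaluate; the snippet below computes $\vartheta_j$, $\mu_j^*$, and the quadratic cost $h_j (f_j)^2$ for sample values ($K_j = 1$ and $\alpha_j = 2$ are arbitrary illustrative choices).

```python
# Closed-form CP1 quantities of (5).
def cp1(f, K=1.0, alpha=2.0):
    theta = (3 * alpha - 1) / (2 * alpha - 1)       # speed scaling factor
    mu_star = theta * f                             # optimal processing rate
    h = K * theta ** (3 - 1 / alpha) / (theta - 1)  # quadratic cost coefficient
    return mu_star, h * f**2                        # (mu*_j, c*_j(f_j))

print(cp1(f=0.5))   # (0.8333..., 1.3448...)
```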

4.1.2. CP2 (Individual Constraints). Each node has a maximum and a minimum processing capacity, which are taken explicitly into account ($\mu_j^{\min} \le \mu_j \le \mu_j^{\max}$, $j = 1, \dots, N$). Then

$$\mu_j^*(f_j) = \begin{cases} \vartheta_j f_j, & \underline{f}_j = \dfrac{\mu_j^{\min}}{\vartheta_j} \le f_j \le \dfrac{\mu_j^{\max}}{\vartheta_j} = \overline{f}_j,\\[6pt] \mu_j^{\min}, & 0 < f_j < \underline{f}_j,\\[4pt] \mu_j^{\max}, & \mu_j^{\max} > f_j > \overline{f}_j. \end{cases} \qquad (6)$$

Therefore,

$$c_j^*(f_j) = \begin{cases} h_j \cdot (f_j)^2, & \underline{f}_j \le f_j \le \overline{f}_j,\\[6pt] K_j\, \dfrac{(f_j)^{1/\alpha_j} (\mu_j^{\max})^{3 - (1/\alpha_j)}}{\mu_j^{\max} - f_j}, & \mu_j^{\max} > f_j > \overline{f}_j,\\[6pt] K_j\, \dfrac{(f_j)^{1/\alpha_j} (\mu_j^{\min})^{3 - (1/\alpha_j)}}{\mu_j^{\min} - f_j}, & 0 < f_j < \underline{f}_j. \end{cases} \qquad (7)$$

In this case, we assume explicitly that $\sum_{i=1}^{S} \lambda_i < \sum_{j=1}^{N} \mu_j^{\max}$.
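Since the three branches of (7) all evaluate the cost (4) at the clamped optimal speed of (6), they can be coded as a single expression. The sketch below is a direct transcription under made-up parameter values, not the authors' code; it assumes $f_j < \mu_j^{\max}$, as stated above.

```python
# Piecewise optimal cost of CP2, eqs. (6)-(7), via speed clamping.
def cp2_cost(f, K, alpha, mu_min, mu_max):
    theta = (3 * alpha - 1) / (2 * alpha - 1)
    mu = min(max(theta * f, mu_min), mu_max)   # clamped optimal speed (6)
    return K * f ** (1 / alpha) * mu ** (3 - 1 / alpha) / (mu - f)

print(cp2_cost(0.5, K=1.0, alpha=2.0, mu_min=0.2, mu_max=2.0))
```

Inside the linear region the returned value coincides with $h_j (f_j)^2$ of (5), which provides an easy consistency check.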

4.2. Noncooperative Players' Optimization. Given the optimal CPs, which can be found in functional form, we can then state the following.

4.2.1. Players' Problem. Each VNF $i$ wants to minimize, with respect to its workload vector $\mathbf{f}^i = \operatorname{col}[f_1^i, \dots, f_N^i]$, a weighted (over its workload distribution) sum of the resources' costs, given the flow distribution of the others:

$$\mathbf{f}^{i*} = \operatorname*{argmin}_{\substack{f_1^i, \dots, f_N^i\\ \sum_{j=1}^{N} f_j^i = \lambda_i}} J^i = \operatorname*{argmin}_{\substack{f_1^i, \dots, f_N^i\\ \sum_{j=1}^{N} f_j^i = \lambda_i}} \frac{1}{\lambda_i} \sum_{j=1}^{N} f_j^i\, c_j^*(f_j), \quad i = 1, \dots, S. \qquad (8)$$

In this formulation, the players' problem is a noncooperative game, of which we can seek a NE [31].

We examine the case of the application of CP1 first. Then the cost of VNF $i$ is given by

$$J^i = \frac{1}{\lambda_i} \sum_{j=1}^{N} f_j^i\, h_j \cdot (f_j)^2 = \frac{1}{\lambda_i} \sum_{j=1}^{N} f_j^i\, h_j \cdot (f_j^i + f_j^{-i})^2, \qquad (9)$$

where $f_j^{-i} = \sum_{k \ne i} f_j^k$ is the total flow from the other VNFs to node $j$. The cost in (9) is convex in $f_j^i$ and belongs to a category of cost functions previously investigated in the networking literature, without specific reference to energy efficiency [32, 33]. In particular, it is in the family of functions considered in [33], for which the NE Point (NEP) existence and uniqueness conditions of Rosen [34] have been shown to hold. Therefore, our players' problem admits a unique NEP. The latter can be found by considering the Karush-Kuhn-Tucker stationarity conditions with Lagrange multiplier $\xi^i$:

$$\xi^i = \frac{\partial J^i}{\partial f_k^i}, \quad f_k^i > 0; \qquad \xi^i \le \frac{\partial J^i}{\partial f_k^i}, \quad f_k^i = 0; \qquad k = 1, \dots, N, \qquad (10)$$

from which, for $f_k^i > 0$,

$$\lambda_i \xi^i = h_k (f_k^i + f_k^{-i})^2 + 2 h_k f_k^i (f_k^i + f_k^{-i}), \qquad f_k^i = -\frac{2}{3} f_k^{-i} \pm \frac{1}{3 h_k} \sqrt{(h_k f_k^{-i})^2 + 3 h_k \lambda_i \xi^i}. \qquad (11)$$

Excluding the negative solution, the nonzero components are those with the smallest and equal partial derivatives which, in a subset $\mathcal{M} \subseteq \{1, \dots, N\}$, yield $\sum_{j \in \mathcal{M}} f_j^i = \lambda_i$; that is,

$$\sum_{j \in \mathcal{M}} \left[ -\frac{2}{3} f_j^{-i} + \frac{1}{3 h_j} \sqrt{(h_j f_j^{-i})^2 + 3 h_j \lambda_i \xi^i} \right] = \lambda_i, \qquad (12)$$

from which $\xi^i$ can be determined.
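Equations (11) and (12) translate directly into a numerical procedure. The sketch below is our own construction (the paper does not prescribe a solver): each VNF's best response is found by bisection on $\xi^i$ so that the positive parts of (11) sum to $\lambda_i$, and the VNFs are iterated round-robin; for this cost family the iteration is observed to settle at the unique NEP, though we make no general convergence claim.

```python
# Sketch: NEP of the quadratic game (9) via best responses from (11)-(12).
import numpy as np

def best_response(lam_i, f_other, h, tol=1e-10):
    """Solve (12) for xi^i by bisection; return the flow vector of (11)."""
    def flows(xi):
        x = (-2.0 / 3.0) * f_other + np.sqrt((h * f_other) ** 2
                                             + 3.0 * h * lam_i * xi) / (3.0 * h)
        return np.maximum(x, 0.0)
    lo, hi = 0.0, 1.0
    while flows(hi).sum() < lam_i:          # find an upper bracket for xi
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if flows(mid).sum() < lam_i else (lo, mid)
    return flows(hi)

def approx_nep(lam, h, rounds=200):
    f = np.outer(lam, np.ones(len(h))) / len(h)   # uniform initial split
    for _ in range(rounds):                       # round-robin best responses
        for i in range(len(lam)):
            f[i] = best_response(lam[i], f.sum(axis=0) - f[i], h)
    return f

f = approx_nep(np.array([0.6, 0.9]), np.array([1.0, 2.0, 4.0]))
print(f.round(4))   # more flow goes to the low-h (cheap) node
```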

As regards form (7) of the optimal node cost, we can note that, if we are restricted to $\underline{f}_j \le f_j \le \overline{f}_j$, $j = 1, \dots, N$ (a coupled convex constraint set), the conditions of Rosen still hold. If we allow the larger constraint set $0 < f_j < \mu_j^{\max}$, $j = 1, \dots, N$, the nodes' optimal cost functions are no longer of polynomial type. However, the composite function is still continuously differentiable (see the Appendix). Then it would (i) satisfy the conditions for a convex game (the overall function composed of the three parts is convex), which guarantee the existence of a NEP [34, Th. 1], and (ii) possess equivalent properties to functions of Type A as defined in [32], which lead to uniqueness of the NEP.

5. Numerical Results

In order to evaluate our criterion, we present numerical results deriving from the application of the simplest Control Policy (CP1), which implies the solution of a completely quadratic problem (corresponding to costs as in (9) for the NEP). We compare the resulting allocations, power consumption, and average delays with those stemming from the application (on non-energy-aware nodes) of the algorithm for the minimization of the pure delay function that was developed in [35, Proposition 1]. This algorithm is considered a valid reference point, in order to provide a good validation of our criterion. The corresponding cost of VNF $i$ is

$$J_T^i = \sum_{j=1}^{N} f_j^i\, T_j(f_j), \quad i = 1, \dots, S, \qquad (13)$$

where

$$T_j(f_j) = \begin{cases} \dfrac{1}{\mu_j^{\max} - f_j}, & f_j < \mu_j^{\max},\\[6pt] \infty, & f_j \ge \mu_j^{\max}. \end{cases} \qquad (14)$$

The total demand is assumed to be less than the total operating capacity, as we have done with our CP2 in (7).

To make the comparison fair, we have to make sure that the maximum operating capacities $\mu_j^{\max}$, $j = 1, \dots, N$ (that are constant in (14)), are compatible with the quadratic behavior stemming from the application of CP1. To this aim, let $^{(\mathrm{LCP1})}f_j$ denote the total input workload to node $j$ corresponding to the NEP obtained by our problem; we then choose

$$\mu_j^{\max} = \vartheta_j\, {}^{(\mathrm{LCP1})}f_j, \quad j = 1, \dots, N. \qquad (15)$$

We indicate with $^{(T)}f_j$, $^{(T)}f_j^i$, $j = 1, \dots, N$, $i = 1, \dots, S$, the total node input workloads and those corresponding to VNF $i$, respectively, produced by the NEP deriving from the minimization of the costs in (13). The average delays for the flow of player $i$ under the two strategy profiles are obtained as

$$D_T^i = \frac{1}{\lambda_i} \sum_{j=1}^{N} \frac{^{(T)}f_j^i}{\mu_j^{\max} - {}^{(T)}f_j} = \frac{1}{\lambda_i} \sum_{j=1}^{N} \frac{^{(T)}f_j^i}{\vartheta_j\, {}^{(\mathrm{LCP1})}f_j - {}^{(T)}f_j},$$

$$D_{\mathrm{LCP1}}^i = \frac{1}{\lambda_i} \sum_{j=1}^{N} \frac{^{(\mathrm{LCP1})}f_j^i}{\vartheta_j\, {}^{(\mathrm{LCP1})}f_j - {}^{(\mathrm{LCP1})}f_j} = \frac{1}{\lambda_i} \sum_{j=1}^{N} \frac{^{(\mathrm{LCP1})}f_j^i}{(\vartheta_j - 1)\, {}^{(\mathrm{LCP1})}f_j}, \qquad (16)$$

and the power consumption values are given by

$$^{(T)}P_j = K_j (\mu_j^{\max})^3 = K_j (\vartheta_j\, {}^{(\mathrm{LCP1})}f_j)^3,$$

$$^{(\mathrm{LCP1})}P_j = K_j (\vartheta_j\, {}^{(\mathrm{LCP1})}f_j)^3 \left( \frac{^{(\mathrm{LCP1})}f_j}{\vartheta_j\, {}^{(\mathrm{LCP1})}f_j} \right)^{1/\alpha_j} = K_j (\vartheta_j\, {}^{(\mathrm{LCP1})}f_j)^3\, \vartheta_j^{-1/\alpha_j}. \qquad (17)$$

Our data for the comparison are organized as follows. We consider normalized input rates, so that $\lambda_i \le 1$, $i = 1, \dots, S$. We generate randomly $S$ values corresponding to the VNFs' input rates from independent uniform distributions; then:

(1) we find the NEP of our quadratic game by choosing the parameters $K_j$, $\alpha_j$, $j = 1, \dots, N$, also from independent uniform distributions (with $1 \le K_j \le 10$, $2 \le \alpha_j \le 3$, resp.);

(2) by using the workload profiles obtained, we compute the values $\mu_j^{\max}$, $j = 1, \dots, N$, from (15) and find the NEP corresponding to the costs in (13) and (14), by using the same input rates;

(3) we compute the corresponding average delays and power consumption values for the two cases, by using (16) and (17), respectively.

Steps (1)-(3) are repeated with a new configuration of random values of the input rates for a certain number of times; then, for both games, we average the values of the delays and power consumption per VNF, and of the total delay and power consumption averaged over the VNFs, over all trials. We compare the results obtained in this way for four different settings of VNFs and nodes, namely, $S = N = 3$; $S = 3$, $N = 5$; $S = 5$, $N = 3$; $S = 5$, $N = 5$. The rationale of repeating the experiments with random configurations is to explore a number of different possible settings and collect the results in a synthetic form.

For each VNFs-nodes setting, 30 random input configurations are generated to produce the final average values. In Figures 2-10, the labels EE and NEE refer to the energy-efficient case (quadratic cost) and to the non-energy-efficient

Node 1 Node 2 Node 3 Average

NEEEESaving ()

0

1

2

3

4

5

6

7

8

9

10

Pow

er co

nsum

ptio

n (W

)

0

10

20

30

40

50

60

70

80

90

100

Savi

ng (

)

Figure 2 Power consumption with 119878 = 119873 = 3

Figure 3: User (VNF) delays with $S = N = 3$.

one (see (13) and (14)), respectively. Instead, the label "Saving (%)" refers to the saving (in percentage) of the EE case compared to the NEE one; when it is negative, the EE case presents higher values than the NEE one. Finally, the label "User" refers to the VNF.

The results are reported in Figures 2-10. The model implementation and the relative results have been developed with MATLAB® [36].

In general, the energy-efficient optimization tends to save energy at the expense of a relatively small increase in delay. Indeed, as shown in the figures, the saving is positive for the energy consumption and negative for the delay. The energy saving is always higher than 18% in every case, while the delay increase is always lower than 4%.

However, it can be noted (by comparing, e.g., Figures 5 and 7) that configurations with lower load are penalized in the delay. This effect is caused by the behavior of the simple linear adaptation strategy, which maintains constant utilization and


Figure 4: Power consumption with $S = 3$, $N = 5$.

Figure 5: User (VNF) delays with $S = 3$, $N = 5$.

Figure 6: Power consumption with $S = 5$, $N = 3$.

Figure 7: User (VNF) delays with $S = 5$, $N = 3$.

Figure 8: Power consumption with $S = 5$, $N = 5$.

Figure 9: User (VNF) delays with $S = 5$, $N = 5$.


Figure 10: Power consumption and delay product of the various cases.

highlights the relevance of setting not only maximum but also minimum values for the processing capacity (or for the node input workloads).

Another consideration arising from the results concerns the variation of the energy consumption with the number of players and/or nodes. For example, Figure 6 (case $S = 5$, $N = 3$) shows a total power consumption significantly higher than the one shown in Figure 8 (case $S = 5$, $N = 5$). The reason for this difference is the lower workload managed by the nodes in the case $S = 5$, $N = 5$ (greater number of nodes with an equal number of users), making it possible to exploit the AR mechanisms with significant energy saving.

Finally, Figure 10 shows the product of the power consumption and the delay for each studied case. As described in the previous sections, our criterion aims at minimizing this product. As can be easily seen, the saving resulting from the application of our policy is always above 10%.

6. Conclusions

We have examined the possible effect of introducing simple energy optimization strategies in physical network nodes performing services for a set of VNFs that request their computational power within an NFV environment. The VNFs' strategy profiles for resource sharing have been obtained from the solution of a noncooperative game, where the players are aware of the adaptive behavior of the resources and aim at minimizing costs that reflect this behavior. We have compared the energy consumption and processing delay with those stemming from a game played with non-energy-aware nodes and VNFs. The results show that the effect on delay is almost negligible, whereas significant power saving can be obtained. In particular, the energy saving is always higher than 18% in every case, with a delay increase always lower than 4%.

From another point of view, it would also be of interest to investigate socially optimal solutions stemming from a Nash Bargaining Problem (NBP), along the lines of [37], by defining suitable utility functions for the players. Another interesting evolution of this work could be the application of additional constraints to the proposed solution, such as, for example, the maximum delay that a flow can support.

Appendix

We check the maintenance of continuous differentiability at the point $f_j = \mu_j^{\max}/\vartheta_j$. By noting that the derivative of the cost function of player $i$ with respect to $f_j^i$ has the form

$$\frac{\partial J^i}{\partial f_j^i} = c_j^*(f_j) + f_j^i\, c_j^{*\prime}(f_j), \qquad (A.1)$$

it is immediate to note that we need to compare

$$\frac{\partial}{\partial f_j^i}\left[\frac{K_j\left(f_j^i+f_j^{-i}\right)^{1/\alpha_j}\left(\mu_j^{\max}\right)^{3-1/\alpha_j}}{\mu_j^{\max}-f_j^i-f_j^{-i}}\right]_{f_j^i=\mu_j^{\max}/\vartheta_j-f_j^{-i}} \quad\text{with}\quad \frac{\partial}{\partial f_j^i}\left[h_j\left(f_j\right)^2\right]_{f_j^i=\mu_j^{\max}/\vartheta_j-f_j^{-i}}. \qquad (A.2)$$

We have

$$K_j\,\frac{(1/\alpha_j)\left(f_j\right)^{1/\alpha_j-1}\left(\mu_j^{\max}\right)^{3-1/\alpha_j}\left(\mu_j^{\max}-f_j\right)+\left(f_j\right)^{1/\alpha_j}\left(\mu_j^{\max}\right)^{3-1/\alpha_j}}{\left(\mu_j^{\max}-f_j\right)^2}\Bigg|_{f_j=\mu_j^{\max}/\vartheta_j}$$

$$=\frac{K_j\left(\mu_j^{\max}\right)^{3-1/\alpha_j}\left(\mu_j^{\max}/\vartheta_j\right)^{1/\alpha_j}\left[(1/\alpha_j)\left(\mu_j^{\max}/\vartheta_j\right)^{-1}\left(\mu_j^{\max}-\mu_j^{\max}/\vartheta_j\right)+1\right]}{\left(\mu_j^{\max}-\mu_j^{\max}/\vartheta_j\right)^2}$$

$$=\frac{K_j\left(\mu_j^{\max}\right)^{3}\left(\vartheta_j\right)^{-1/\alpha_j}\left[(1/\alpha_j)\,\vartheta_j\left(1-1/\vartheta_j\right)+1\right]}{\left(\mu_j^{\max}\right)^2\left(1-1/\vartheta_j\right)^2}=\frac{K_j\,\mu_j^{\max}\left(\vartheta_j\right)^{-1/\alpha_j}\left(\vartheta_j/\alpha_j-1/\alpha_j+1\right)}{\left(\left(\vartheta_j-1\right)/\vartheta_j\right)^2}$$

and

$$2h_j f_j\Big|_{f_j=\mu_j^{\max}/\vartheta_j}=\frac{2K_j\left(\vartheta_j\right)^{3-1/\alpha_j}}{\vartheta_j-1}\cdot\frac{\mu_j^{\max}}{\vartheta_j}. \qquad (A.3)$$

Then let us check when

$$\frac{K_j\,\mu_j^{\max}\left(\vartheta_j\right)^{-1/\alpha_j}\left(\vartheta_j/\alpha_j-1/\alpha_j+1\right)}{\left(\left(\vartheta_j-1\right)/\vartheta_j\right)^2}=\frac{2K_j\left(\vartheta_j\right)^{3-1/\alpha_j}}{\vartheta_j-1}\cdot\frac{\mu_j^{\max}}{\vartheta_j},$$

that is,

$$\frac{\left(\vartheta_j/\alpha_j-1/\alpha_j+1\right)\left(\vartheta_j\right)^2}{\vartheta_j-1}=2\left(\vartheta_j\right)^2, \qquad \frac{\vartheta_j}{\alpha_j}-\frac{1}{\alpha_j}+1=2\vartheta_j-2.$$

Substituting $\vartheta_j=(3\alpha_j-1)/(2\alpha_j-1)$ in the equivalent condition $\vartheta_j\left(1/\alpha_j-2\right)-1/\alpha_j+3=0$:

$$\left(\frac{1}{\alpha_j}-2\right)\frac{3\alpha_j-1}{2\alpha_j-1}-\frac{1}{\alpha_j}+3=0, \quad \left(\frac{1-2\alpha_j}{\alpha_j}\right)\frac{3\alpha_j-1}{2\alpha_j-1}-\frac{1}{\alpha_j}+3=0, \quad \frac{1-3\alpha_j}{\alpha_j}-\frac{1}{\alpha_j}+3=0,$$

$$1-3\alpha_j-1+3\alpha_j=0, \qquad (A.4)$$

which holds identically; hence the two one-sided derivatives coincide, and the cost function is continuously differentiable at $f_j=\mu_j^{\max}/\vartheta_j$.
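As a sanity check (ours, not part of the original paper), the identity above can be verified symbolically. The sketch below assumes, consistently with the derivation, $h_j = K_j \vartheta_j^{3-1/\alpha_j}/(\vartheta_j - 1)$ and $\vartheta_j = (3\alpha_j - 1)/(2\alpha_j - 1)$; both expressions are our reading of the formulas above, not independently sourced:

```python
import sympy as sp

f, mu, a, K = sp.symbols('f mu alpha K', positive=True)

theta = (3*a - 1) / (2*a - 1)                   # assumed: theta_j expressed through alpha_j
h = K * theta**(3 - 1/a) / (theta - 1)          # assumed h_j, consistent with (A.3)

left = K * f**(1/a) * mu**(3 - 1/a) / (mu - f)  # cost branch below the switching point
right = h * f**2                                # quadratic branch above it

d_left = sp.diff(left, f).subs(f, mu / theta)
d_right = sp.diff(right, f).subs(f, mu / theta)

print(sp.simplify(d_left - d_right))            # should reduce to 0 for any alpha > 1/2
# numeric spot check at alpha = 3/4, K = mu = 1 (theta_j = 5/2 there):
print(sp.N((d_left - d_right).subs({a: sp.Rational(3, 4), K: 1, mu: 1})))
```

Even when the symbolic simplification does not fully collapse, the numeric evaluation confirms that the difference vanishes, in agreement with (A.4).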

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was partially supported by the H2020 European Projects ARCADIA (http://www.arcadia-framework.eu) and INPUT (http://www.input-project.eu), funded by the European Commission.

References

[1] C. Bianco, F. Cucchietti, and G. Griffa, "Energy consumption trends in the next generation access network - a telco perspective," in Proceedings of the 29th International Telecommunication Energy Conference (INTELEC '07), pp. 737-742, Rome, Italy, September-October 2007.
[2] GeSI, SMARTer 2020: The Role of ICT in Driving a Sustainable Future, December 2012, http://gesi.org/portfolio/report/72.
[3] A. Manzalini, "Future edge ICT networks," IEEE COMSOC MMTC E-Letter, vol. 7, no. 7, pp. 1-4, 2012.
[4] B. A. A. Nunes, M. Mendonca, X.-N. Nguyen, K. Obraczka, and T. Turletti, "A survey of software-defined networking: past, present, and future of programmable networks," IEEE Communications Surveys & Tutorials, vol. 16, no. 3, pp. 1617-1634, 2014.
[5] "Network functions virtualisation - introductory white paper," in Proceedings of the SDN and OpenFlow World Congress, Darmstadt, Germany, October 2012.
[6] R. Bolla, R. Bruschi, F. Davoli, and C. Lombardo, "Fine-grained energy-efficient consolidation in SDN networks and devices," IEEE Transactions on Network and Service Management, vol. 12, no. 2, pp. 132-145, 2015.
[7] R. Bolla, R. Bruschi, F. Davoli, and F. Cucchietti, "Energy efficiency in the future internet: a survey of existing approaches and trends in energy-aware fixed network infrastructures," IEEE Communications Surveys & Tutorials, vol. 13, no. 2, pp. 223-244, 2011.
[8] I. Menache and A. Ozdaglar, Network Games: Theory, Models and Dynamics, Morgan & Claypool Publishers, San Rafael, Calif, USA, 2011.
[9] R. Bolla, R. Bruschi, F. Davoli, and P. Lago, "Optimizing the power-delay product in energy-aware packet forwarding engines," in Proceedings of the 24th Tyrrhenian International Workshop on Digital Communications - Green ICT (TIWDC '13), pp. 1-5, IEEE, Genoa, Italy, September 2013.
[10] A. Khan, A. Zugenmaier, D. Jurca, and W. Kellerer, "Network virtualization: a hypervisor for the internet?" IEEE Communications Magazine, vol. 50, no. 1, pp. 136-143, 2012.
[11] Q. Duan, Y. Yan, and A. V. Vasilakos, "A survey on service-oriented network virtualization toward convergence of networking and cloud computing," IEEE Transactions on Network and Service Management, vol. 9, no. 4, pp. 373-392, 2012.
[12] G. M. Roy, S. K. Saurabh, N. M. Upadhyay, and P. K. Gupta, "Creation of virtual node, virtual link and managing them in network virtualization," in Proceedings of the World Congress on Information and Communication Technologies (WICT '11), pp. 738-742, IEEE, Mumbai, India, December 2011.
[13] A. Manzalini, R. Saracco, C. Buyukkoc et al., "Software-defined networks or future networks and services - main technical challenges and business implications," in Proceedings of the IEEE Workshop SDN4FNS, Trento, Italy, November 2013.
[14] M. Yu, Y. Yi, J. Rexford, and M. Chiang, "Rethinking virtual network embedding: substrate support for path splitting and migration," ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, pp. 17-19, 2008.
[15] A. Jarray and A. Karmouch, "Decomposition approaches for virtual network embedding with one-shot node and link mapping," IEEE/ACM Transactions on Networking, vol. 23, no. 3, pp. 1012-1025, 2015.
[16] N. M. M. K. Chowdhury, M. R. Rahman, and R. Boutaba, "Virtual network embedding with coordinated node and link mapping," in Proceedings of the 28th Conference on Computer Communications (IEEE INFOCOM '09), pp. 783-791, Rio de Janeiro, Brazil, April 2009.
[17] X. Cheng, S. Su, Z. Zhang et al., "Virtual network embedding through topology-aware node ranking," ACM SIGCOMM Computer Communication Review, vol. 41, no. 2, pp. 38-47, 2011.
[18] J. F. Botero, X. Hesselbach, M. Duelli, D. Schlosser, A. Fischer, and H. De Meer, "Energy efficient virtual network embedding," IEEE Communications Letters, vol. 16, no. 5, pp. 756-759, 2012.
[19] A. Leivadeas, C. Papagianni, and S. Papavassiliou, "Efficient resource mapping framework over networked clouds via iterated local search-based request partitioning," IEEE Transactions on Parallel and Distributed Systems, vol. 24, no. 6, pp. 1077-1086, 2013.
[20] A. Fischer, J. F. Botero, M. T. Beck, H. de Meer, and X. Hesselbach, "Virtual network embedding: a survey," IEEE Communications Surveys & Tutorials, vol. 15, no. 4, pp. 1888-1906, 2013.
[21] M. Scholler, M. Stiemerling, A. Ripke, and R. Bless, "Resilient deployment of virtual network functions," in Proceedings of the 5th International Congress on Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT '13), pp. 208-214, Almaty, Kazakhstan, September 2013.
[22] A. Jarray and A. Karmouch, "VCG auction-based approach for efficient Virtual Network embedding," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 609-615, Ghent, Belgium, May 2013.
[23] A. Gember-Jacobson, R. Viswanathan, C. Prakash et al., "OpenNF: enabling innovation in network function control," in Proceedings of the ACM Conference on Special Interest Group on Data Communication (SIGCOMM '14), pp. 163-174, ACM, Chicago, Ill, USA, August 2014.
[24] J. Antoniou and A. Pitsillides, Game Theory in Communication Networks: Cooperative Resolution of Interactive Networking Scenarios, CRC Press, Boca Raton, Fla, USA, 2013.
[25] M. S. Seddiki, M. Frikha, and Y.-Q. Song, "A non-cooperative game-theoretic framework for resource allocation in network virtualization," Telecommunication Systems, vol. 61, no. 2, pp. 209-219, 2016.
[26] L. Pavel, Game Theory for Control of Optical Networks, Springer, New York, NY, USA, 2012.
[27] E. Altman, T. Boulogne, R. El-Azouzi, T. Jimenez, and L. Wynter, "A survey on networking games in telecommunications," Computers & Operations Research, vol. 33, no. 2, pp. 286-311, 2006.
[28] S. Nedevschi, L. Popa, G. Iannaccone, S. Ratnasamy, and D. Wetherall, "Reducing network energy consumption via sleeping and rate-adaptation," in Proceedings of the 5th USENIX Symposium on Networked Systems Design and Implementation (NSDI '08), pp. 323-336, San Francisco, Calif, USA, April 2008.
[29] S. Henzler, Power Management of Digital Circuits in Deep Sub-Micron CMOS Technologies, vol. 25 of Springer Series in Advanced Microelectronics, Springer, Dordrecht, The Netherlands, 2007.
[30] K. Li, "Scheduling parallel tasks on multiprocessor computers with efficient power management," in Proceedings of the IEEE International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum (IPDPSW '10), pp. 1-8, IEEE, Atlanta, Ga, USA, April 2010.
[31] T. Basar, Lecture Notes on Non-Cooperative Game Theory, Hamilton Institute, Kildare, Ireland, 2010, http://www.hamilton.ie/ollie/Downloads/Game.pdf.
[32] A. Orda, R. Rom, and N. Shimkin, "Competitive routing in multiuser communication networks," IEEE/ACM Transactions on Networking, vol. 1, no. 5, pp. 510-521, 1993.
[33] E. Altman, T. Basar, T. Jimenez, and N. Shimkin, "Competitive routing in networks with polynomial costs," IEEE Transactions on Automatic Control, vol. 47, no. 1, pp. 92-96, 2002.
[34] J. B. Rosen, "Existence and uniqueness of equilibrium points for concave N-person games," Econometrica, vol. 33, no. 3, pp. 520-534, 1965.
[35] Y. A. Korilis, A. A. Lazar, and A. Orda, "Architecting noncooperative networks," IEEE Journal on Selected Areas in Communications, vol. 13, no. 7, pp. 1241-1251, 1995.
[36] MATLAB: The Language of Technical Computing, http://www.mathworks.com/products/matlab.
[37] H. Yaiche, R. R. Mazumdar, and C. Rosenberg, "A game theoretic framework for bandwidth allocation and pricing in broadband networks," IEEE/ACM Transactions on Networking, vol. 8, no. 5, pp. 667-678, 2000.

Research Article
A Processor-Sharing Scheduling Strategy for NFV Nodes

Giuseppe Faraci, Alfio Lombardo, and Giovanni Schembra

Dipartimento di Ingegneria Elettrica, Elettronica e Informatica (DIEEI), University of Catania, 95123 Catania, Italy

Correspondence should be addressed to Giovanni Schembra; schembra@dieei.unict.it

Received 2 November 2015; Accepted 12 January 2016

Academic Editor: Xavier Hesselbach

Copyright © 2016 Giuseppe Faraci et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The introduction of the two paradigms SDN and NFV to "softwarize" the current Internet is making management and resource allocation two key challenges in the evolution towards the Future Internet. In this context, this paper proposes Network-Aware Round Robin (NARR), a processor-sharing strategy to reduce delays in traversing SDN/NFV nodes. The application of NARR alleviates the job of the Orchestrator by automatically working at the intranode level, dynamically assigning the processor slices to the virtual network functions (VNFs) according to the state of the queues associated with the output links of the network interface cards (NICs). An extensive simulation set is presented to show the improvements achieved with respect to two other processor-sharing strategies chosen as reference.

1 Introduction

In the last few years the diffusion of new complex and efficient distributed services in the Internet is becoming increasingly difficult because of the ossification of the Internet protocols, the proprietary nature of existing hardware appliances, the costs, and the lack of skilled professionals for maintaining and upgrading them.

In order to alleviate these problems, two new network paradigms, Software Defined Networks (SDN) [1-5] and Network Functions Virtualization (NFV) [6, 7], have been recently proposed with the specific target of improving the flexibility of network service provisioning and reducing the time to market of new services.

SDN is an emerging architecture that aims at making the network dynamic, manageable, and cost-effective by decoupling the system that makes decisions about where traffic is sent (the control plane) from the underlying system that forwards traffic to the selected destination (the data plane). In this way the network control becomes directly programmable, and the underlying infrastructure is abstracted for applications and network services.

NFV is a core structural change in the way telecommunication infrastructure is deployed. The NFV initiative started in late 2012 by some of the biggest telecommunications service providers, which formed an Industry Specification Group (ISG) within the European Telecommunications Standards Institute (ETSI). The interest has grown, involving today more than 28 network operators and over 150 technology providers from across the telecommunications industry [7]. The NFV paradigm leverages on virtualization technologies and commercial off-the-shelf programmable hardware, such as general-purpose servers, storage, and switches, with the final target of decoupling the software implementation of network functions from the underlying hardware.

The coexistence and the interaction of both NFV and SDN paradigms is giving to the network operators the possibility of achieving greater agility and acceleration in new service deployments, with a consequent considerable reduction of both Capital Expenditure (CAPEX) and Operational Expenditure (OPEX) [8].

One of the main challenging problems in deploying an SDN/NFV network is an efficient design of the resource allocation and management functions that are in charge of the network Orchestrator. Although this task is well covered in data center and cloud scenarios [9, 10], it is currently a challenging problem in a geographic network, where transmission delays cannot be neglected and transmission capacities of the interconnection links are not comparable with those of the above scenarios. This is the reason why the problem of orchestrating an SDN/NFV network is still open and attracts a lot of research interest from both academia and industry.



More specifically, the whole problem of orchestrating an SDN/NFV network is very complex because it involves a design work at both internode and intranode levels [11, 12]. At the internode level, and in all the cases where the time between each execution is significantly greater than the time to collect, compute, and disseminate results, the application of a centralized approach is practicable [13, 14]. In these cases, in fact, taking into consideration both traffic characterization and the required level of quality of service (QoS), the Orchestrator is able to decide in a centralized manner how many instances of the same function have to be run simultaneously, the network nodes that have to execute them, and the routing strategies that allow traffic flows to cross the nodes where the requested network functions are running.

Instead, centralizing a strategy dealing with operations that require dynamic reconfiguration of network resources is actually unfeasible to be executed by the Orchestrator, for problems of resilience and scalability. Therefore adaptive management operations with short timescales require a distributed approach. The authors of [15] propose a framework to support adaptive resource management operations, which involve short timescale reconfiguration of network resources, showing how requirements in terms of load-balancing and energy management [16-18] can be satisfied. Another work in the same direction is [19], which proposes a solution for the consolidation of VMs on local computing resources, exclusively based on local information. The work in [20] aims at alleviating the inter-VM network latency, defining a hypervisor scheduler algorithm that is able to take into consideration the allocation of the resources within a consolidated environment, scheduling VMs to reduce their waiting latency in the run queue. Another approach is applied in [21], which introduces a policy to manage the internal on/off switching of virtual network functions (VNFs) in NFV-compliant Customer Premises Equipment (CPE) devices.

The focus of this paper is on another fundamental problem that is inherent to resource allocation within an SDN/NFV node, that is, the decision of the percentage of CPU to be assigned to each VNF. If at a first glance this could seem a classical problem of processor sharing, widely explored in the past literature [15, 22, 23], actually it is much more complex, because performance can be strongly improved by leveraging on the correlation with the output queues associated with the network interface cards (NICs).

With all this in mind, the main contribution of this paper is the definition of a processor-sharing policy, in the following referred to as Network-Aware Round Robin (NARR), which is specific for SDN/NFV nodes. Starting from the consideration that packets that have received the service of a network function from a virtual machine (VM) running on a given node are enqueued to wait for transmission through a given NIC, the proposed strategy dynamically changes the slices of the CPU assigned to each VNF according to the state of the output NIC queues. More specifically, the NARR strategy gives a larger CPU slice to serve packets that will leave the node through the NIC that is currently less loaded, in such a way as to minimize wastes of the NIC output link capacities, also minimizing the overall delay experienced by packets traversing nodes that implement NARR.

As a side contribution, the paper calculates an on-off model for the traffic leaving the SDN/NFV node on each NIC output link. This model can be used as a building block for the design and performance evaluation of an entire network.

The paper is structured as follows. Section 2 describes the node architecture. Section 3 introduces the NARR strategy. Section 4 presents a case study and shows some numerical results. Finally, Section 5 draws some conclusions and discusses some future work.

2 System Description

The target of this section is the description of the system we consider in the rest of the paper. It is an SDN/NFV node, as the one considered in [11, 12], where we will apply the NARR strategy. Its architecture, shown in Figure 1(a), is compliant with the ETSI Specifications [24]. It is composed of three different domains, namely, the Compute domain, the Hypervisor domain, and the Infrastructure Network domain. The Compute domain provides the computational and storage hardware resources that allow the node to host the VNFs. Thanks to the computing and storage virtualization provided by the Hypervisor domain, a VM can be created, migrated from one node to another one, and halted, in order to optimize the deployment according to specific performance parameters. Communications among the VMs, and between the VMs and the external environment, are provided by the Infrastructure Network domain.

The SDN/NFV node is remotely controlled by the Orchestrator, whose architecture is shown in Figure 1(b). It is constituted by three main blocks. The Orchestration Engine executes all the algorithms and the strategies to manage and orchestrate the whole network. After each decision, the Orchestration Engine requests that the NFV Coordinator instantiates, migrates, or halts VMs, and consequently requests that the SDN Controller modifies the flow tables of the SDN switches in the network in such a way that traffic flows can traverse VMs hosting the requested VNFs.

With this in mind, a functional architecture of the NFV node is represented in Figure 2. Its main components are the Processor, which manages the Compute domain and hosts the Hypervisor domain, and the Network Interface Cards (NICs) with their queues, which constitute the "Network Hardware" block in Figure 1.

Let $M$ be the number of virtual network functions (VNFs) that are running in the node, and let $L$ be the number of output NICs. In order to simplify notation, in the following we will assume that all the NICs have the same characteristics in terms of buffer capacity and output rate. So, let $K^{(\mathrm{NIC})}$ be the size of the queue associated with each NIC, that is, the maximum number of packets that each queue can contain, and let $\mu^{(\mathrm{NIC})}$ be the transmission rate of the output link associated with each NIC, expressed in bit/s.

The Flow Distributor block has the task of routing each entering flow towards the function required by the flow. It is a software SDN switch that can be implemented, for example, with Open vSwitch [25]. It routes the flows to the VMs running the requested VNFs according to the control messages received by the SDN Controller residing in the Orchestrator.

Figure 1: Network function virtualization infrastructure: (a) NFV node; (b) Orchestrator.

Figure 2: NFV node functional architecture.

The most common protocol that can be used for the communications between the SDN Controller and the Orchestrator is OpenFlow [26].
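As a purely illustrative example (not taken from the paper), this is how a rule steering a flow towards the port of the VM hosting the requested VNF could be installed when Open vSwitch acts as the Flow Distributor; the bridge name, addresses, and port numbers are hypothetical:

```python
import subprocess

# Hypothetical setup: the Flow Distributor is an OVS bridge named "br-dist",
# the flow enters from port 1, and the VM running the requested VNF sits on port 7.
subprocess.run(
    ["ovs-ofctl", "add-flow", "br-dist",
     "priority=200,ip,in_port=1,nw_src=10.0.0.0/24,nw_dst=10.1.0.5,actions=output:7"],
    check=True,
)
```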

Let $\mu^{(P)}$ be the total processing rate of the processor, expressed in packets/s. This rate is shared among all the active functions according to a processor rate scheduling strategy. Let $\mu^{(F)}$ be the array whose generic element $\mu^{(F)}_{[m]}$, with $m \in \{1, \dots, M\}$, is the portion of the processor rate assigned to the VM implementing the function $F_m$. Of course, we have

$$\sum_{m=1}^{M} \mu^{(F)}_{[m]} = \mu^{(P)}. \qquad (1)$$

Once a packet has been served by the required function, it is sent to one of the NICs to exit from the node. If the NIC is transmitting another packet, the arriving packets are enqueued in the NIC queue. We will indicate the queue associated with the generic NIC $l$ as $Q^{(\mathrm{NIC})}_l$.

In order to implement the NARR strategy proposed in this paper, we realize the block relative to each function with a set of $L$ parallel queues, in the following referred to as inside-function queues, as shown in Figure 3 for the generic function $F_m$. The generic $l$th inside-function queue of the function $F_m$, indicated as $Q_{m,l}$ in Figure 3, is used to enqueue packets that, after receiving the service of the function $F_m$, will leave the node through the NIC $l$. Let $K^{(F)}_{\mathrm{Ins}}$ be the size of each inside-function queue. Each inside-function queue of the generic function $F_m$ receives a portion of the processor rate assigned to that function. Let $\mu^{(F_{\mathrm{Ins}})}_{[m,l]}$ be the portion of the processor rate assigned to the queue $Q_{m,l}$ of the function $F_m$. Of course, we have

$$\sum_{l=1}^{L} \mu^{(F_{\mathrm{Ins}})}_{[m,l]} = \mu^{(F)}_{[m]}. \qquad (2)$$

Figure 3: Block diagram of the generic function $F_m$.

The portion of processor rate associated with each inside-function queue is dynamically changed by the Processor Arbiter according to the NARR strategy described in Section 3.
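To fix the notation in code form, the state observed by the Processor Arbiter can be collected in a small structure; this is a sketch with names of our own choosing, preloaded with the parameter values of the case study in Section 4:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class NodeState:
    """State observed by the Processor Arbiter (field names are ours).

    s_nic[l]    : length of the NIC l queue, S^(NIC)_l
    s_ins[m][l] : length of the inside-function queue Q_{m,l}, S^(F_Ins)_{m,l}
    """
    M: int = 4                   # number of VNFs running in the node
    L: int = 3                   # number of output NICs
    mu_p: float = 306e3          # total processor rate mu^(P) [packets/s]
    k_nic: int = 3000            # NIC queue capacity K^(NIC) [packets]
    s_nic: List[int] = field(default_factory=lambda: [0] * 3)
    s_ins: List[List[int]] = field(default_factory=lambda: [[0] * 3 for _ in range(4)])
```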

3 NARR Processor-Sharing Strategy

The NARR (Network-Aware Round Robin) processor-sharing strategy observes the state of both the inside-function queues and the NIC queues, with the goal of reducing as far as possible the inactivity periods of the NIC output links. As already introduced in the Introduction, its definition starts from the fact that packets that have received the service of a VNF are enqueued to wait for transmission through a given NIC. So, in order to avoid output link capacity waste, NARR dynamically changes the slices of the CPU assigned to each VNF, and in particular to each inside-function queue, according to the state of the output NIC queues, assigning larger CPU slices to serve packets that will leave the node through less-loaded NICs.

More specifically, the Processor Arbiter decides the processor rate portions according to two different steps.

Step 1 (assignment of the processor rate portion to the aggregation of queues whose output is a specific NIC). This step meets the target of the proposed strategy, that is, to reduce as much as possible underutilization of the NIC output links and, as a consequence, delays in the relative queues. To this purpose, let us consider a virtual queue that contains all the packets that are stored in all the inside-function queues $Q_{m,l}$, for each $m \in [1, M]$, that is, all the packets that will leave the node through the NIC $l$. Let us indicate this virtual queue as $Q^{(\rightarrow\mathrm{NIC}_l)}_{\mathrm{Aggr}}$ and its service rate as $\mu^{(\rightarrow\mathrm{NIC}_l)}_{\mathrm{Aggr}}$. Of course, we have

$$\mu^{(\rightarrow\mathrm{NIC}_l)}_{\mathrm{Aggr}} = \sum_{m=1}^{M} \mu^{(F_{\mathrm{Ins}})}_{[m,l]}. \qquad (3)$$

With this in mind, the idea is to give a higher processor slice to the inside-function queues whose flows are directed to the NICs that are emptying.

Taking into account the goal of privileging the flows that will leave the node through underloaded NICs, the Processor Arbiter calculates $\mu^{(\rightarrow\mathrm{NIC}_l)}_{\mathrm{Aggr}}$ as follows:

$$\mu^{(\rightarrow\mathrm{NIC}_l)}_{\mathrm{Aggr}} = \frac{q_{\mathrm{ref}} - S^{(\mathrm{NIC})}_l}{\sum_{j=1}^{L}\left(q_{\mathrm{ref}} - S^{(\mathrm{NIC})}_j\right)}\,\mu^{(P)}, \qquad (4)$$

where $S^{(\mathrm{NIC})}_l$ represents the state of the queue associated with the NIC $l$, while $q_{\mathrm{ref}}$ is defined as follows:

$$q_{\mathrm{ref}} = \min\left\{\alpha \cdot \max_j\left(S^{(\mathrm{NIC})}_j\right),\; K^{(\mathrm{NIC})}\right\}. \qquad (5)$$

The term $q_{\mathrm{ref}}$ is a reference target value calculated from the state of the NIC queue that has the highest length, amplified with a coefficient $\alpha$ and truncated to the maximum queue size $K^{(\mathrm{NIC})}$. It is determined in such a way that, if we consider $\alpha = 1$, the NIC queue that has the highest length does not receive packets from the inside-function queues, because their service rate is set to zero; the other queues receive packets with a rate that is proportional to the distance between their length and the length of the most overloaded NIC queue. However, through an extensive set of simulations, we deduced that setting $\alpha = 1$ causes bad performance because there is always a group of inside-function queues that are not served. Instead, all the $\alpha$ values in the interval $]1, 2]$ give almost equivalent performance. For this reason, in the numerical analysis presented in Section 4, we have set $\alpha = 1.2$.
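A compact sketch of Step 1 follows, directly transcribing (4) and (5) with α = 1.2; the even split used when all NIC queues are empty is our own convention for a corner case the equations leave open:

```python
def step1_aggregate_rates(s_nic, mu_p, k_nic, alpha=1.2):
    """Share mu^(P) among the per-NIC virtual queues Q_Aggr^(->NIC_l)."""
    q_ref = min(alpha * max(s_nic), k_nic)        # eq. (5)
    gaps = [q_ref - s for s in s_nic]             # q_ref - S^(NIC)_l, one per NIC
    total = sum(gaps)
    if total == 0:                                # all NIC queues empty
        return [mu_p / len(s_nic)] * len(s_nic)
    return [mu_p * g / total for g in gaps]       # eq. (4)
```

For instance, with NIC queue lengths [50, 0, 30] and α = 1.2, we get q_ref = 60, and the three aggregates receive slices proportional to 10, 60, and 30: the emptiest NIC obtains the largest share, which is the behavior visible in Figure 5.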

Step 2 (assignment of the processor rate portion to each inside-function queue). Let us consider the generic $l$th inside-function queue of the function $F_m$, that is, the queue $Q_{m,l}$. Its service rate is calculated as being proportional to the current state of this queue in comparison with the other $l$th queues of the other functions. To this purpose, let us indicate the state of the virtual queue $Q^{(\rightarrow\mathrm{NIC}_l)}_{\mathrm{Aggr}}$ as $S^{(\rightarrow\mathrm{NIC}_l)}_{\mathrm{Aggr}}$. Of course, it can be calculated as the sum of the states of all the inside-function queues $Q_{m,l}$, for each $m \in [1, M]$, that is,

$$S^{(\rightarrow\mathrm{NIC}_l)}_{\mathrm{Aggr}} = \sum_{k=1}^{M} S^{(F_{\mathrm{Ins}})}_{k,l}. \qquad (6)$$

So the service rate of the inside-function queue $Q_{m,l}$ is determined as a fraction of the service rate assigned at the first step to the virtual queue $Q^{(\rightarrow\mathrm{NIC}_l)}_{\mathrm{Aggr}}$, $\mu^{(\rightarrow\mathrm{NIC}_l)}_{\mathrm{Aggr}}$, as follows:

$$\mu^{(F_{\mathrm{Ins}})}_{[m,l]} = \frac{S^{(F_{\mathrm{Ins}})}_{m,l}}{S^{(\rightarrow\mathrm{NIC}_l)}_{\mathrm{Aggr}}}\,\mu^{(\rightarrow\mathrm{NIC}_l)}_{\mathrm{Aggr}}. \qquad (7)$$

Of course, if at any time an inside-function queue remains empty, the processor rate portion assigned to it will be shared among the other queues, proportionally to the processor portions previously assigned. Likewise, if at some instant an empty queue receives a new packet, the previous processor rate portion is reassigned to that queue.
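Combining the two steps, one arbitration pass can be sketched as follows; it reuses `step1_aggregate_rates` and the `NodeState` structure introduced above and is a simplified transcription of (6) and (7), not the authors' simulator code. Note that the proportional split of (7) already assigns zero rate to empty inside-function queues, so their share is implicitly redistributed to the nonempty ones:

```python
def narr_processor_shares(state, alpha=1.2):
    """One NARR pass: returns the M x L matrix of rates mu^(F_Ins)_{[m,l]}."""
    mu_aggr = step1_aggregate_rates(state.s_nic, state.mu_p, state.k_nic, alpha)
    mu_ins = [[0.0] * state.L for _ in range(state.M)]
    for l in range(state.L):
        s_aggr = sum(state.s_ins[m][l] for m in range(state.M))      # eq. (6)
        if s_aggr == 0:
            continue    # no packet is waiting for NIC l in this round
        for m in range(state.M):
            mu_ins[m][l] = mu_aggr[l] * state.s_ins[m][l] / s_aggr   # eq. (7)
    return mu_ins
```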

Journal of Electrical and Computer Engineering 5

4 Case Study

In this section we present a numerical analysis of the behavior of an SDN/NFV node that applies the proposed NARR processor-sharing strategy, with the target of evaluating the achieved performance. To this purpose, we will consider two other processor-sharing strategies as reference, in the following referred to as round robin (RR) and queue-length weighted round robin (QLWRR). In both the reference cases the node has the same $L$ NIC queues, but it has only $M$ processor queues, one for each function. Each of the $M$ queues has a size of $K^{(F)} = L \cdot K^{(F)}_{\mathrm{Ins}}$, where $K^{(F)}_{\mathrm{Ins}}$ represents the size of each internal function queue already defined so far for the proposed strategy.

The RR strategy applies the classical round robin scheduling policy to serve the $M$ function queues, that is, it serves each function queue with a rate $\mu^{(F)}_{\mathrm{RR}} = \mu^{(P)}/M$.

The QLWRR strategy, on the other hand, serves each function queue with a rate that is proportional to the queue length, that is,

$$\mu^{(F_m)}_{\mathrm{QLWRR}} = \frac{S^{(F_m)}}{\sum_{k=1}^{M} S^{(F_k)}}\,\mu^{(P)}, \qquad (8)$$

where $S^{(F_m)}$ is the state of the queue associated with the function $F_m$.
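For completeness, the two reference policies can be sketched in the same style; `s_f[m]` denotes the length of the single queue of function F_m, and the all-empty fallback is again our own convention:

```python
def rr_shares(M, mu_p):
    """Round robin: each of the M function queues gets mu^(P)/M."""
    return [mu_p / M] * M

def qlwrr_shares(s_f, mu_p):
    """Queue-length weighted round robin, eq. (8)."""
    total = sum(s_f)
    if total == 0:
        return [mu_p / len(s_f)] * len(s_f)
    return [mu_p * s / total for s in s_f]
```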

4.1. Parameter Settings. In this numerical analysis we consider a node with $M = 4$ VNFs and $L = 3$ output NICs. We loaded the node with a balanced traffic constituted by $N_F = 12$ flows, each characterized by a different 2-tuple (requested NFV, output NIC). Each flow has been generated with an on-off model characterized by exponentially distributed on and off periods. When a flow is in the off state, no packets arrive to the node from it; instead, when it is in the on state, packets arrive with an average rate $\lambda_{\mathrm{ON}}$. Each packet is assumed to have an exponentially distributed size with a mean of 1 kbyte. In all the simulations we have considered the same on-off cycle duration $\delta = T_{\mathrm{OFF}} + T_{\mathrm{ON}} = 5$ ms, while we have varied the burstiness. Burstiness of on-off sources is defined as

$$b = \frac{\lambda_{\mathrm{ON}}}{\lambda_{\mathrm{Mean}}}, \qquad (9)$$

where $\lambda_{\mathrm{Mean}}$ is the mean emission rate. Now, indicating the probability of the ON state as $\pi_{\mathrm{ON}}$, and taking into account that $\lambda_{\mathrm{Mean}} = \lambda_{\mathrm{ON}} \cdot \pi_{\mathrm{ON}}$ and $\pi_{\mathrm{ON}} = T_{\mathrm{ON}}/(T_{\mathrm{OFF}} + T_{\mathrm{ON}})$, we have

$$b = \frac{T_{\mathrm{OFF}} + T_{\mathrm{ON}}}{T_{\mathrm{ON}}}. \qquad (10)$$

In our analysis the burstiness has been varied in the range $[2, 22]$. Consequently, we have derived the mean durations of the off and on periods as follows:

$$T_{\mathrm{ON}} = \frac{\delta}{b}, \qquad T_{\mathrm{OFF}} = \delta - T_{\mathrm{ON}}. \qquad (11)$$

Finally, in order to maintain the same mean packet rate for different values of $T_{\mathrm{OFF}}$ and $T_{\mathrm{ON}}$, we have assumed that 75 packets are transmitted on average in each on period, that is, $\lambda_{\mathrm{ON}} = 75/T_{\mathrm{ON}}$. The resulting mean emission rate is $\lambda_{\mathrm{Mean}} = 122.9$ Mbit/s.
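The numbers quoted above follow from (9)-(11) and the 1-kbyte (8192-bit) mean packet size; a quick check:

```python
def onoff_params(b, delta=5e-3, pkts_per_on=75, pkt_bits=8192):
    """Derive the on-off source parameters from the burstiness b."""
    t_on = delta / b                          # eq. (11)
    t_off = delta - t_on                      # eq. (11)
    lam_on = pkts_per_on / t_on               # packets/s while in the ON state
    mean_mbps = pkts_per_on / delta * pkt_bits / 1e6
    return t_on, t_off, lam_on, mean_mbps

# burstiness b = 7, as in the temporal analysis of Section 4.2:
print(onoff_params(7))  # T_ON ~ 0.714 ms, T_OFF ~ 4.286 ms, ~122.9 Mbit/s mean rate
```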

As far as the NICs are concerned, we have considered an output rate of $\mu^{(\mathrm{NIC})} = 980$ Mbit/s, so having a utilization coefficient on each NIC of $\rho^{(\mathrm{NIC})} = 0.5$, and a queue size $K^{(\mathrm{NIC})} = 3000$ packets. Instead, regarding the processor, we considered a size of each inside-function queue of $K^{(F)}_{\mathrm{Ins}} = 3000$ packets. Finally, we have analyzed two different processor cases. In the first case we considered a processor that is able to process $\mu^{(P)} = 306$ kpackets/s, while in the second case we assumed a processor rate of $\mu^{(P)} = 204$ kpackets/s. Therefore, defining the processor utilization coefficient as follows:

$$\rho^{(P)} = \frac{N_F \cdot \lambda_{\mathrm{Mean}}}{\mu^{(P)}}, \qquad (12)$$

with $N_F \cdot \lambda_{\mathrm{Mean}}$ being the total mean arrival rate to the node (expressed in packets/s), the two considered cases are characterized by a processor utilization coefficient of $\rho^{(P)}_{\mathrm{Low}} = 0.6$ and $\rho^{(P)}_{\mathrm{High}} = 0.9$, respectively.
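The two utilization figures can be verified from (12), recalling that $\lambda_{\mathrm{Mean}}$ corresponds to 15 kpackets/s per flow with 1-kbyte packets:

```python
N_F = 12                       # number of flows
lam_mean = 15_000              # packets/s per flow (122.9 Mbit/s / 8192 bit)
for mu_p in (306_000, 204_000):
    rho = N_F * lam_mean / mu_p            # eq. (12)
    print(f"mu_P = {mu_p/1000:.0f} kpackets/s -> rho_P = {rho:.2f}")
# prints 0.59 (~0.6) and 0.88 (~0.9)
```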

4.2. Numerical Results. In this section we present some results achieved by discrete-event simulations. The simulation tool used in the paper is publicly available in [27]. We first present a temporal analysis of the main variables characterizing the SDN/NFV node, and then we show a performance comparison of NARR with the two strategies RR and QLWRR, taken as a reference.

For the temporal analysis we focus on a short time interval of 720 μs, in order to be able to clearly highlight the evolution of the considered processes. We have loaded the node with an on-off traffic like the one described in Section 4.1, with a burstiness $b = 7$.

Figures 4, 5, and 6 show the time evolution of the length of the queues associated with the NICs, the processor slice assigned to each virtual queue loading each NIC, and the length of the same virtual queues. We can subdivide the considered time interval into three different periods.

In the first period, ranging between the instants 0.1063 and 0.1067, from Figure 4 we can notice that the NIC queue $Q^{(\mathrm{NIC})}_1$ has a greater length than the queue $Q^{(\mathrm{NIC})}_3$, while $Q^{(\mathrm{NIC})}_2$ is empty. For this reason, in this period, as shown in Figure 5, the processor is shared between $Q^{(\rightarrow\mathrm{NIC}_1)}_{\mathrm{Aggr}}$ and $Q^{(\rightarrow\mathrm{NIC}_3)}_{\mathrm{Aggr}}$, and the slice assigned to serve $Q^{(\rightarrow\mathrm{NIC}_3)}_{\mathrm{Aggr}}$ is higher, in such a way that the two queue lengths $Q^{(\mathrm{NIC})}_1$ and $Q^{(\mathrm{NIC})}_3$ reach the same value, a situation that happens at the end of the first period, around the instant 0.1067. During this period the behavior of both the virtual queues $Q^{(\rightarrow\mathrm{NIC}_1)}_{\mathrm{Aggr}}$ and $Q^{(\rightarrow\mathrm{NIC}_3)}_{\mathrm{Aggr}}$ in Figure 6 remains flat, showing the fact that the received processor rates are able to balance the arrival rates.

At the beginning of the second period, which ranges between the instants 0.1067 and 0.10693, the processor rate assigned to the virtual queue $Q^{(\rightarrow\mathrm{NIC}_3)}_{\mathrm{Aggr}}$ has become insufficient to serve the amount of arriving traffic, and so the virtual queue $Q^{(\rightarrow\mathrm{NIC}_3)}_{\mathrm{Aggr}}$ increases its length, as shown in Figure 6.

Figure 4: Length evolution of the NIC queues.

Figure 5: Packet rate assigned to the virtual queues.

Figure 6: Length evolution of the virtual queues.

During this second period the processor slices assigned to the two queues $Q^{(\rightarrow\mathrm{NIC}_1)}_{\mathrm{Aggr}}$ and $Q^{(\rightarrow\mathrm{NIC}_3)}_{\mathrm{Aggr}}$ are adjusted in such a way that $Q^{(\mathrm{NIC})}_1$ and $Q^{(\mathrm{NIC})}_3$ remain with comparable lengths.

The last period starts at the instant 0.10693, characterized by the fact that the aggregated queue $Q^{(\rightarrow\mathrm{NIC}_2)}_{\mathrm{Aggr}}$ leaves the empty state and therefore participates in the processor-sharing process. Since the NIC queue $Q^{(\mathrm{NIC})}_2$ is low-loaded, as shown in Figure 4, the largest slice is assigned to $Q^{(\rightarrow\mathrm{NIC}_2)}_{\mathrm{Aggr}}$, in such a way that $Q^{(\mathrm{NIC})}_2$ can reach the same length of the other two NIC queues as soon as possible.

Now, in order to show how the second step of the proposed strategy works, we present the behavior of the inside-function queues whose output is sent to the NIC queue $Q^{(\mathrm{NIC})}_3$ during the same short time interval considered so far. More specifically, Figures 7 and 8 show the length of the considered inside-function queues and the processor slices assigned to them, respectively. The behavior of the queue $Q_{2,3}$ is not shown because it is empty in the considered period. As we can observe from the above figures, we can subdivide the interval into four periods:

(i) The first period, ranging in the interval [0.1063, 0.10657], is characterized by an empty state of the queue $Q_{3,3}$. Thus, in this period, the processor slice assigned to the aggregated queue $Q^{(\rightarrow\mathrm{NIC}_3)}_{\mathrm{Aggr}}$ is shared by $Q_{1,3}$ and $Q_{4,3}$ only.

(ii) During the second period, ranging in the interval [0.10657, 0.1067], $Q_{1,3}$ is scarcely loaded (in particular, it is empty in the second part of this period), and so the processor slice assigned to $Q_{3,3}$ is increased.

(iii) In the third period, ranging between 0.1067 and 0.10693, all the queues increase and equally share the processor.

(iv) Finally, in the last period, starting at the instant 0.10693, as already observed in Figure 5, the processor slice assigned to the aggregated queue $Q^{(\rightarrow\mathrm{NIC}_3)}_{\mathrm{Aggr}}$ is suddenly decreased, and consequently the slices assigned to the queues $Q_{1,3}$, $Q_{3,3}$, and $Q_{4,3}$ are decreased as well.

The steady-state analysis is presented in Figures 9, 10, and 11, which show the mean delay in the inside-function queues, the mean delay in the NIC queues, and the durations of the off and on states on the output links. The values reported in all the figures have been derived as the mean values of the results of many simulation experiments, using Student's t-distribution and with a 95% confidence interval. The number of experiments carried out to evaluate each numerical result has been automatically decided by the simulation tool, with the requirement of achieving a confidence interval less than 0.001 of the estimated mean value. To this purpose, the confidence interval is calculated at the end of each run, and simulation is stopped only when the confidence interval matches the maximum error requirement. The duration of each run has been chosen in such a way that the sample standard deviation is so low that less than 30 runs are enough to match the requirement.

Figure 7: Length evolution of the inside-function queues $Q_{m,3}$ for each $m \in [1, M]$: (a) queue $Q_{1,3}$; (b) queue $Q_{3,3}$; (c) queue $Q_{4,3}$.

The figures compare the results achieved with the proposed strategy with the ones obtained with the two strategies RR and QLWRR. As said so far, results have been obtained against the burstiness and for two different values of the utilization coefficient, that is, $\rho^{(P)}_{\mathrm{Low}} = 0.6$ and $\rho^{(P)}_{\mathrm{High}} = 0.9$. As expected, the mean lengths of the processor queues and the NIC queues increase with both the burstiness and the utilization coefficient.

Instead, we can note that the mean length of the processor queues is not affected by the applied policy. In fact, packets requiring the same function are enqueued in a unique queue and served with a rate $\mu^{(P)}/4$ when RR or QLWRR strategies are applied, while when the NARR strategy is used they are split into 12 different queues and served with a rate $\mu^{(P)}/12$. If in classical queueing theory [28] the second case is worse than the first one, because of the presence of service rate wastes during the more probable periods of empty queues, this is not the case here, because the processor capacity of an empty queue is dynamically reassigned to the other queues.

The advantage of the NARR strategy is evident in Figure 10, where the mean delay in the NIC queues is represented. In fact, we can observe that by only giving more processor rate to the most loaded processor queues (with the QLWRR strategy) performance improvements are negligible, while applying the NARR strategy we are able to obtain a delay reduction of about 12% in the case of a more performant processor ($\rho^{(P)}_{\mathrm{Low}} = 0.6$), reaching 50% when the processor works with $\rho^{(P)}_{\mathrm{High}} = 0.9$. The performance gain achieved with the NARR strategy increases with burstiness and with the processor load conditions, which are both likely. In fact, the first condition is due to the high burstiness of the Internet traffic; the second one is true as well, because the processor should not be overdimensioned for economic purposes; otherwise, if overdimensioned, it can be controlled with a processor rate management policy like the one presented in [29], in order to save energy.

Finally, Figure 11 shows the mean durations of the on and off periods on the node output links.

Figure 8: Processor rate assigned to the inside-function queues $Q_{m,3}$ for each $m \in [1, M]$: (a) $\mu^{(F_{\mathrm{Ins}})}_{[1,3]}$; (b) $\mu^{(F_{\mathrm{Ins}})}_{[3,3]}$; (c) $\mu^{(F_{\mathrm{Ins}})}_{[4,3]}$.

Only one curve is shown for each case because we have considered a utilization coefficient $\rho^{(\mathrm{NIC})} = 0.5$ on each NIC queue, and therefore in the considered case we have $T_{\mathrm{ON}} = T_{\mathrm{OFF}}$. As expected, the mean on-off durations are higher when the processor rate is higher (i.e., lower utilization coefficient). This is because, when the processor rate is lower, the NIC queues are fed at a lower rate, and therefore batches of packets in the NIC queues are served quickly. These results can be used to model the output traffic of each SDN/NFV node as an Interrupted Poisson Process (IPP) [30]. This model can be iteratively used to represent the input traffic of other nodes, with the final target of realizing the model of a whole SDN/NFV network.
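As an example of how these statistics can be reused, the sketch below draws packet emission times from an Interrupted Poisson Process with exponentially distributed on and off periods, in the spirit of [30]; the mean durations and the on-state rate used in the example are illustrative placeholders that would, in practice, be taken from measurements such as those in Figure 11:

```python
import random

def ipp_arrivals(t_on_mean, t_off_mean, lam_on, horizon):
    """Packet emission times of an IPP over [0, horizon] seconds."""
    t, on, times = 0.0, True, []
    while t < horizon:
        period = random.expovariate(1.0 / (t_on_mean if on else t_off_mean))
        if on:  # Poisson arrivals at rate lam_on during the ON period
            a = t + random.expovariate(lam_on)
            while a < min(t + period, horizon):
                times.append(a)
                a += random.expovariate(lam_on)
        t += period
        on = not on
    return times

# illustrative values: 60 us mean on/off periods, ~120 kpackets/s while on
packets = ipp_arrivals(60e-6, 60e-6, 120e3, horizon=0.1)
```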

5 Conclusions and Future Work

This paper addresses the problem of intranode resource allocation by introducing NARR, a processor-sharing strategy that leverages on the consideration that, in any SDN/NFV node, packets that have received the service of a VNF are enqueued to wait for transmission through one of the output NICs. Therefore, the idea at the base of NARR is to dynamically change the slices of the CPU assigned to each VNF according to the state of the output NIC queues, giving more CPU to serve packets that will leave the node through the less-loaded NICs. In this way, wastes of the NIC output link capacities are minimized, and consequently the overall delay experienced by packets traversing the nodes that implement NARR is reduced.

SDN/NFV nodes that implement NARR can coexist in the same network with nodes that use other strategies, so facilitating a gradual introduction in the Future Internet.

As a side contribution, the simulator tool, which is publicly available on the web, gives the on-off model of the output links associated with each NIC as one of its results. This model can be used as a building block to realize a model for the design and performance evaluation of a whole SDN/NFV network.

Figure 9: Mean delay in the function internal queues.

Figure 10: Mean delay in the NIC queues.

Figure 11: Mean off- and on-state durations of the traffic on the output links.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work has been partially supported by the INPUT (In-Network Programmability for Next-Generation Personal


cloUd Service Support) project funded by the European Commission under the Horizon 2020 Programme (Call H2020-ICT-2014-1, Grant no. 644672).

References

[1] "Software-Defined Networking: The New Norm for Networks," white paper, https://www.opennetworking.org.
[2] M. Yu, L. Jose, and R. Miao, "Software defined traffic measurement with OpenSketch," in Proceedings of the Symposium on Network Systems Design and Implementation (NSDI '13), vol. 13, pp. 29-42, Lombard, Ill, USA, April 2013.
[3] H. Kim and N. Feamster, "Improving network management with software defined networking," IEEE Communications Magazine, vol. 51, no. 2, pp. 114-119, 2013.
[4] S. H. Yeganeh, A. Tootoonchian, and Y. Ganjali, "On scalability of software-defined networking," IEEE Communications Magazine, vol. 51, no. 2, pp. 136-141, 2013.
[5] B. A. A. Nunes, M. Mendonca, X.-N. Nguyen, K. Obraczka, and T. Turletti, "A survey of software-defined networking: past, present, and future of programmable networks," IEEE Communications Surveys and Tutorials, vol. 16, no. 3, pp. 1617-1634, 2014.
[6] "Network Functions Virtualisation," white paper, http://portal.etsi.org/NFV/NFV_White_Paper.pdf.
[7] Network Functions Virtualisation (NFV): Network Operator Perspectives on Industry Progress, ETSI, October 2013, http://portal.etsi.org/NFV/NFV_White_Paper2.pdf.
[8] A. Manzalini, R. Saracco, E. Zerbini et al., "Software-Defined Networks for Future Networks and Services," white paper based on the IEEE Workshop SDN4FNS, http://sites.ieee.org/sdn4fns/whitepaper.
[9] R. Z. Frantz, R. Corchuelo, and J. L. Arjona, "An efficient orchestration engine for the cloud," in Proceedings of the IEEE 3rd International Conference on Cloud Computing Technology and Science (CloudCom '11), vol. 2, pp. 711-716, Athens, Greece, December 2011.
[10] K. Bousselmi, Z. Brahmi, and M. M. Gammoudi, "Cloud services orchestration: a comparative study of existing approaches," in Proceedings of the 28th IEEE International Conference on Advanced Information Networking and Applications Workshops (WAINA '14), pp. 410-416, Victoria, Canada, May 2014.
[11] A. Lombardo, A. Manzalini, V. Riccobene, and G. Schembra, "An analytical tool for performance evaluation of software defined networking services," in Proceedings of the IEEE Network Operations and Management Symposium (NOMS '14), pp. 1-7, Krakow, Poland, May 2014.
[12] A. Lombardo, A. Manzalini, G. Schembra, G. Faraci, C. Rametta, and V. Riccobene, "An open framework to enable NetFATE (network functions at the edge)," in Proceedings of the IEEE Workshop on Management Issues in SDN, SDI and NFV (Mission '15), pp. 1-6, London, UK, April 2015.
[13] A. Manzalini, R. Minerva, F. Callegati, W. Cerroni, and A. Campi, "Clouds of virtual machines in edge networks," IEEE Communications Magazine, vol. 51, no. 7, pp. 63-70, 2013.
[14] H. Moens and F. De Turck, "VNF-P: a model for efficient placement of virtualized network functions," in Proceedings of the 10th International Conference on Network and Service Management (CNSM '14), pp. 418-423, Rio de Janeiro, Brazil, November 2014.
[15] K. Katsalis, G. S. Paschos, Y. Viniotis, and L. Tassiulas, "CPU provisioning algorithms for service differentiation in cloud-based environments," IEEE Transactions on Network and Service Management, vol. 12, no. 1, pp. 61-74, 2015.
[16] D. Tuncer, M. Charalambides, G. Pavlou, and N. Wang, "Towards decentralized and adaptive network resource management," in Proceedings of the 7th International Conference on Network and Service Management (CNSM '11), pp. 1-6, Paris, France, October 2011.
[17] D. Tuncer, M. Charalambides, G. Pavlou, and N. Wang, "DACoRM: a coordinated, decentralized and adaptive network resource management scheme," in Proceedings of the IEEE Network Operations and Management Symposium (NOMS '12), pp. 417-425, IEEE, Maui, Hawaii, USA, April 2012.
[18] M. Charalambides, D. Tuncer, L. Mamatas, and G. Pavlou, "Energy-aware adaptive network resource management," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 369-377, Ghent, Belgium, May 2013.
[19] C. Mastroianni, M. Meo, and G. Papuzzo, "Probabilistic consolidation of virtual machines in self-organizing cloud data centers," IEEE Transactions on Cloud Computing, vol. 1, no. 2, pp. 215-228, 2013.
[20] B. Guan, J. Wu, Y. Wang, and S. U. Khan, "CIVSched: a communication-aware inter-VM scheduling technique for decreased network latency between co-located VMs," IEEE Transactions on Cloud Computing, vol. 2, no. 3, pp. 320-332, 2014.
[21] G. Faraci and G. Schembra, "An analytical model to design and manage a green SDN/NFV CPE node," IEEE Transactions on Network and Service Management, vol. 12, no. 3, pp. 435-450, 2015.
[22] L. Kleinrock, "Time-shared systems: a theoretical treatment," Journal of the ACM, vol. 14, no. 2, pp. 242-261, 1967.
[23] S. F. Yashkov and A. S. Yashkova, "Processor sharing: a survey of the mathematical theory," Automation and Remote Control, vol. 68, no. 9, pp. 1662-1731, 2007.
[24] ETSI GS NFV-INF 001 v1.1.1, "Network Functions Virtualization (NFV): Infrastructure Overview," 2015, https://www.etsi.org/deliver/etsi_gs/NFV-INF/001_099/001/01.01.01_60/gs_NFV-INF001v010101p.pdf.
[25] T. N. Subedi, K. K. Nguyen, and M. Cheriet, "OpenFlow-based in-network Layer-2 adaptive multipath aggregation in data centers," Computer Communications, vol. 61, pp. 58-69, 2015.
[26] N. McKeown, T. Anderson, H. Balakrishnan et al., "OpenFlow: enabling innovation in campus networks," ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, pp. 69-74, 2008.
[27] NARR simulator v0.1, http://www.diit.unict.it/arti/Tools/NARRSimulator_v01.zip.
[28] L. Kleinrock, Queueing Systems, Volume 1: Theory, Wiley-Interscience, 1st edition, 1975.
[29] R. Bruschi, P. Lago, A. Lombardo, and G. Schembra, "Modeling power management in networked devices," Computer Communications, vol. 50, pp. 95-109, 2014.
[30] W. Fischer and K. S. Meier-Hellstern, "The Markov-Modulated Poisson process (MMPP) cookbook," Performance Evaluation, vol. 18, no. 2, pp. 149-171, 1993.


Research Article
Virtual Networking Performance in OpenStack Platform for Network Function Virtualization

Franco Callegati, Walter Cerroni, and Chiara Contoli

DEI, University of Bologna, Via Venezia 52, 47521 Cesena, Italy

Correspondence should be addressed to Walter Cerroni; walter.cerroni@unibo.it

Received 19 October 2015; Revised 19 January 2016; Accepted 30 March 2016

Academic Editor: Yan Luo

Copyright © 2016 Franco Callegati et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The emerging Network Function Virtualization (NFV) paradigm, coupled with the highly flexible and programmatic control of network devices offered by Software Defined Networking solutions, enables unprecedented levels of network virtualization that will definitely change the shape of future network architectures, where legacy telco central offices will be replaced by cloud data centers located at the edge. On the one hand, this software-centric evolution of telecommunications will allow network operators to take advantage of the increased flexibility and reduced deployment costs typical of cloud computing. On the other hand, it will pose a number of challenges in terms of virtual network performance and customer isolation. This paper intends to provide some insights on how an open-source cloud computing platform such as OpenStack implements multitenant network virtualization and how it can be used to deploy NFV, focusing in particular on packet forwarding performance issues. To this purpose, a set of experiments is presented that refer to a number of scenarios inspired by the cloud computing and NFV paradigms, considering both single tenant and multitenant scenarios. From the results of the evaluation it is possible to highlight potentials and limitations of running NFV on OpenStack.

1 Introduction

Despite the original vision of the Internet as a set of networks interconnected by distributed layer 3 routing nodes, nowadays IP datagrams are not simply forwarded to their final destination based on IP header and next-hop information. A number of so-called middle-boxes process IP traffic, performing cross-layer tasks such as address translation, packet inspection and filtering, QoS management, and load balancing. They represent a significant fraction of network operators' capital and operational expenses. Moreover, they are closed systems, and the deployment of new communication services is strongly dependent on the product capabilities, causing the so-called "vendor lock-in" and Internet "ossification" phenomena [1]. A possible solution to this problem is the adoption of virtualized middle-boxes based on open software and hardware solutions. Network virtualization brings great advantages in terms of flexible network management, performed at the software level, and possible coexistence of multiple customers sharing the same physical infrastructure (i.e., multitenancy). Network virtualization solutions are already widely deployed at different protocol layers, including Virtual Local Area Networks (VLANs), multilayer Virtual Private Network (VPN) tunnels over public wide-area interconnections, and Overlay Networks [2].

Today the combination of emerging technologies such as Network Function Virtualization (NFV) and Software Defined Networking (SDN) promises to bring innovation one step further. SDN provides a more flexible and programmatic control of network devices and fosters new forms of virtualization that will definitely change the shape of future network architectures [3], while NFV defines standards to deploy software-based building blocks implementing highly flexible network service chains capable of adapting to the rapidly changing user requirements [4].

As a consequence, it is possible to imagine a medium-term evolution of the network architectures where middle-boxes will turn into virtual machines (VMs) implementing network functions within cloud computing infrastructures, and telco central offices will be replaced by data centers located at the edge of the network [5-7]. Network operators will take advantage of the increased flexibility and reduced



deployment costs typical of the cloud-based approach, paving the way to the upcoming software-centric evolution of telecommunications [8]. However, a number of challenges must be dealt with, in terms of system integration, data center management, and packet processing performance. For instance, if VLANs are used in the physical switches and in the virtual LANs within the cloud infrastructure, a suitable integration is necessary, and the coexistence of different IP virtual networks dedicated to multiple tenants must be seamlessly guaranteed with proper isolation.

Then a few questions are naturally raised: Will cloud computing platforms be actually capable of satisfying the requirements of complex communication environments such as the operators' edge networks? Will data centers be able to effectively replace the existing telco infrastructures at the edge? Will virtualized networks provide performance comparable to those achieved with current physical networks, or will they pose significant limitations? Indeed, the answer to these questions will be a function of the cloud management platform considered. In this work the focus is on OpenStack, which is among the state-of-the-art Linux-based virtualization and cloud management tools. Developed by the open-source software community, OpenStack implements the Infrastructure-as-a-Service (IaaS) paradigm in a multitenant context [9].

To the best of our knowledge, not much work has been reported about the actual performance limits of network virtualization in OpenStack cloud infrastructures under the NFV scenario. Some authors assessed the performance of Linux-based virtual switching [10, 11], while others investigated network performance in public cloud services [12]. Solutions for low-latency SDN implementation on high-performance cloud platforms have also been developed [13]. However, none of the above works specifically deals with NFV scenarios on the OpenStack platform. Although some mechanisms for effectively placing virtual network functions within an OpenStack cloud have been presented [14], a detailed analysis of their network performance has not been provided yet.

This paper aims at providing insights on how the OpenStack platform implements multitenant network virtualization, focusing in particular on the performance issues, trying to fill a gap that is starting to get the attention also of the OpenStack developer community [15]. The paper objective is to identify performance bottlenecks in the cloud implementation of the NFV paradigms. An ad hoc set of experiments were designed to evaluate the OpenStack performance under critical load conditions, in both single tenant and multitenant scenarios. The results reported in this work extend the preliminary assessment published in [16, 17].

The paper is structured as follows: the network virtualization concept in cloud computing infrastructures is further elaborated in Section 2; the OpenStack virtual network architecture is illustrated in Section 3; the experimental test-bed that we have deployed to assess its performance is presented in Section 4; the results obtained under different scenarios are discussed in Section 5; some conclusions are finally drawn in Section 6.

2. Cloud Network Virtualization

Generally speaking, network virtualization is not a new concept. Virtual LANs, Virtual Private Networks, and Overlay Networks are examples of virtualization techniques already widely used in networking, mostly to achieve isolation of traffic flows and/or of whole network sections, either for security or for functional purposes such as traffic engineering and performance optimization [2].

Upon considering cloud computing infrastructures, the concept of network virtualization evolves even further. It is not just that some functionalities can be configured in physical devices to obtain some additional functionality in virtual form. In cloud infrastructures, whole parts of the network are virtual, implemented with software devices and/or functions running within the servers. This new "softwarized" network implementation scenario allows novel network control and management paradigms. In particular, the synergies between NFV and SDN offer programmatic capabilities that allow easily defining and flexibly managing multiple virtual network slices at levels not achievable before [1].

In cloud networking, the typical scenario is a set of VMs dedicated to a given tenant, able to communicate with each other as if connected to the same Local Area Network (LAN), independently of the physical server/servers they are running on. The VMs and LAN of different tenants have to be isolated and should communicate with the outside world only through layer 3 routing and filtering devices. From such requirements stem two major issues to be addressed in cloud networking: (i) integration of any set of virtual networks defined in the data center physical switches with the specific virtual network technologies adopted by the hosting servers and (ii) isolation among virtual networks that must be logically separated because of being dedicated to different purposes or different customers. Moreover, these problems should be solved with performance optimization in mind, for instance, aiming at keeping VMs with intensive exchange of data colocated in the same server, keeping local traffic inside the host and thus reducing the need for external network resources and minimizing the communication latency.

The solution to these issues is usually fully supported by the VM manager (i.e., the Hypervisor) running on the hosting servers. Layer 3 routing functions can be executed by taking advantage of lightweight virtualization tools, such as Linux containers or network namespaces, resulting in isolated virtual networks with dedicated network stacks (e.g., IP routing tables and netfilter flow states) [18]. Similarly, layer 2 switching is typically implemented by means of kernel-level virtual bridges/switches interconnecting a VM's virtual interface to a host's physical interface. Moreover, the VM placing algorithms may be designed to take networking issues into account, thus optimizing the networking in the cloud together with computation effectiveness [19]. Finally, it is worth mentioning that, whatever network virtualization technology is adopted within a data center, it should be compatible with SDN-based implementation of the control plane (e.g., OpenFlow) for improved manageability and programmability [20].
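The following minimal sketch illustrates these kernel building blocks outside of any cloud platform (our own example: names such as tenant1, br0, and veth0/veth1 are hypothetical, and root privileges with the iproute2 tools are assumed). It creates an isolated network namespace with its own IP stack and wires it to a kernel-level Linux Bridge through a veth pair:

```python
import subprocess

def sh(*cmd):
    """Run a command and fail loudly if it returns nonzero."""
    subprocess.run(cmd, check=True)

# Isolated layer 3 domain with its own routing table and netfilter state
sh("ip", "netns", "add", "tenant1")

# Kernel-level layer 2 switch (Linux Bridge)
sh("ip", "link", "add", "br0", "type", "bridge")
sh("ip", "link", "set", "br0", "up")

# veth pair: a virtual "cable" whose endpoints can sit in different namespaces
sh("ip", "link", "add", "veth0", "type", "veth", "peer", "name", "veth1")
sh("ip", "link", "set", "veth0", "master", "br0")     # host end attached to the bridge
sh("ip", "link", "set", "veth0", "up")
sh("ip", "link", "set", "veth1", "netns", "tenant1")  # other end moved into the namespace

# From now on, tenant1 has a dedicated network stack reachable through br0
sh("ip", "netns", "exec", "tenant1", "ip", "addr", "add", "10.0.1.2/24", "dev", "veth1")
sh("ip", "netns", "exec", "tenant1", "ip", "link", "set", "veth1", "up")
```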


[Figure 1: Main components of an OpenStack cloud setup. The controller node, network node, compute nodes, and storage node are interconnected by the management network; the network node and the compute nodes also share the instance/tunnel (data) network, while the network node connects cloud customers to the Internet through the external network.]

For the purposes of this work, the implementation of layer 2 connectivity in the cloud environment is of particular relevance. Many Hypervisors running on Linux systems implement the LANs inside the servers using Linux Bridge, the native kernel bridging module [21]. This solution is straightforward and is natively integrated with the powerful Linux packet filtering and traffic conditioning kernel functions. The overall performance of this solution should be at a reasonable level when the system is not overloaded [22]. The Linux Bridge basically works as a transparent bridge with MAC learning, providing the same functionality as a standard Ethernet switch in terms of packet forwarding. But such standard behavior is not compatible with SDN and is not flexible enough when aspects such as multitenant traffic isolation, transparent VM mobility, and fine-grained forwarding programmability are critical. The Linux-based bridging alternative is Open vSwitch (OVS), a software switching facility specifically designed for virtualized environments and capable of reaching kernel-level performance [23]. OVS is also OpenFlow-enabled and therefore fully compatible and integrated with SDN solutions.
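For a flavor of the extra flexibility, the sketch below (again our own illustrative example, not OpenStack code; br0, tap0, and eth1 are hypothetical names) builds a small OVS topology and then programs its forwarding with an OpenFlow rule, something a plain Linux Bridge cannot express:

```python
import subprocess

def sh(*cmd):
    subprocess.run(cmd, check=True)

# An OVS bridge behaves like a MAC-learning switch by default...
sh("ovs-vsctl", "add-br", "br0")
sh("ovs-vsctl", "add-port", "br0", "tap0")  # e.g., a VM's tap interface
sh("ovs-vsctl", "add-port", "br0", "eth1")  # physical uplink

# ...but its forwarding behavior is fully programmable via OpenFlow:
# tag everything entering from tap0 with VLAN 100 before sending it upstream.
sh("ovs-ofctl", "add-flow", "br0",
   "priority=10,in_port=tap0,actions=mod_vlan_vid:100,output:eth1")
```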

3. OpenStack Virtual Network Infrastructure

OpenStack provides cloud managers with a web-based dashboard as well as a powerful and flexible Application Programmable Interface (API) to control a set of physical hosting servers executing different kinds of Hypervisors (in general, OpenStack is designed to manage a number of computers hosting application servers; these application servers can be executed by fully fledged VMs, lightweight containers, or bare-metal hosts; in this work we focus on the most challenging case of application servers running on VMs) and to manage the required storage facilities and virtual network infrastructures. The OpenStack dashboard also allows instantiating computing and networking resources within the data center infrastructure with a high level of transparency. As illustrated in Figure 1, a typical OpenStack cloud is composed of a number of physical nodes and networks:

(i) Controller node: managing the cloud platform.
(ii) Network node: hosting the networking services for the various tenants of the cloud and providing external connectivity.
(iii) Compute nodes: as many hosts as needed in the cluster to execute the VMs.
(iv) Storage nodes: to store data and VM images.
(v) Management network: the physical networking infrastructure used by the controller node to manage the OpenStack cloud services running on the other nodes.
(vi) Instance/tunnel network (or data network): the physical network infrastructure connecting the network node and the compute nodes, to deploy virtual tenant networks and allow inter-VM traffic exchange and VM connectivity to the cloud networking services running in the network node.
(vii) External network: the physical infrastructure enabling connectivity outside the data center.

OpenStack has a component specifically dedicated to network service management: this component, formerly known as Quantum, was renamed as Neutron in the Havana release. Neutron decouples the network abstractions from the actual implementation and provides administrators and users with a flexible interface for virtual network management. The Neutron server is centralized and typically runs in the controller node. It stores all network-related information and implements the virtual network infrastructure in a distributed and coordinated way. This allows Neutron to transparently manage multitenant networks across multiple compute nodes and to provide transparent VM mobility within the data center.

Neutron's main network abstractions are

(i) network: a virtual layer 2 segment;
(ii) subnet: a layer 3 IP address space used in a network;
(iii) port: an attachment point to a network and to one or more subnets on that network;
(iv) router: a virtual appliance that performs routing between subnets and address translation;
(v) DHCP server: a virtual appliance in charge of IP address distribution;
(vi) security group: a set of filtering rules implementing a cloud-level firewall.

A cloud customer wishing to implement a virtual infrastructure in the cloud is considered an OpenStack tenant and can use the OpenStack dashboard to instantiate computing and networking resources, typically creating a new network and the necessary subnets, optionally spawning the related DHCP servers, then starting as many VM instances as required, based on a given set of available images, and specifying the subnet (or subnets) to which the VM is connected. Neutron takes care of creating a port on each specified subnet (and its underlying network) and of connecting the VM to that port, while the DHCP service on that network (resident in the network node) assigns a fixed IP address to it. Other virtual appliances (e.g., routers providing global connectivity) can be implemented directly in the cloud platform, by means of containers and network namespaces typically defined in the network node. The different tenant networks are isolated by means of VLANs and network namespaces, whereas the security groups protect the VMs from external attacks or unauthorized access. When some VM instances offer services that must be reachable by external users, the cloud provider defines a pool of floating IP addresses on the external network and configures the network node with VM-specific forwarding rules based on those floating addresses.
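This provisioning workflow can also be reproduced programmatically; a minimal sketch with the openstacksdk Python client is shown below (cloud name, image, and flavor identifiers are hypothetical placeholders, and error handling is omitted):

```python
import openstack

# Credentials and region are read from clouds.yaml; "mycloud" is a placeholder
conn = openstack.connect(cloud="mycloud")

# 1. Create a tenant network and a subnet (Neutron spawns the DHCP server)
net = conn.network.create_network(name="tenant-net")
subnet = conn.network.create_subnet(
    name="tenant-subnet",
    network_id=net.id,
    ip_version=4,
    cidr="10.0.1.0/24",
)

# 2. Boot a VM attached to that subnet: Neutron creates the port and
#    the DHCP service assigns the fixed IP address to the VM
server = conn.compute.create_server(
    name="vm1",
    image_id="<image-uuid>",    # placeholder
    flavor_id="<flavor-uuid>",  # placeholder
    networks=[{"uuid": net.id}],
)
server = conn.compute.wait_for_server(server)
print(server.name, "is", server.status)
```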

OpenStack implements the virtual network infrastructure (VNI) exploiting multiple virtual bridges connecting virtual and/or physical interfaces that may reside in different network namespaces. To better understand such a complex system, a graphical tool was developed to display all the network elements used by OpenStack [24]. Two examples, showing the internal state of a network node connected to three virtual subnets and a compute node running two VMs, are displayed in Figures 2 and 3, respectively.

Each node runs an OVS-based integration bridge named br-int and, connected to it, an additional OVS bridge for each data center physical network attached to the node. So the network node (Figure 2) includes br-tun for the instance/tunnel network and br-ex for the external network, whereas a compute node (Figure 3) includes br-tun only.

Layer 2 virtualization and multitenant isolation on the physical network can be implemented using either VLANs or layer 2-in-layer 3/4 tunneling solutions, such as Virtual eXtensible LAN (VXLAN) or Generic Routing Encapsulation (GRE), which allow extending the local virtual networks also to remote data centers [25]. The examples shown in Figures 2 and 3 refer to the case of tenant isolation implemented with GRE tunnels on the instance/tunnel network. Whatever virtualization technology is used in the physical network, its virtual networks must be mapped into the VLANs used internally by Neutron to achieve isolation. This is performed by taking advantage of the programmable features available in OVS, through the insertion of appropriate OpenFlow mapping rules in br-int and br-tun.
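To give an idea of what such mapping rules look like, the following simplified sketch (our own illustration; the actual Neutron pipeline is split over several OpenFlow tables, and the tunnel key, VLAN id, and port names below are hypothetical, the latter modeled on Figure 2) maps a GRE tunnel key to the tenant's internal VLAN on ingress, and back on egress:

```python
import subprocess

def ofctl(bridge, flow):
    subprocess.run(["ovs-ofctl", "add-flow", bridge, flow], check=True)

TENANT_VLAN = 1         # VLAN id used internally by Neutron (hypothetical)
TENANT_TUN_KEY = "0x1"  # GRE key identifying the tenant on the wire (hypothetical)

# Ingress: GRE tunnel key -> local VLAN tag, then hand over to br-int
ofctl("br-tun",
      f"priority=10,tun_id={TENANT_TUN_KEY},"
      f"actions=mod_vlan_vid:{TENANT_VLAN},output:patch-int")

# Egress: local VLAN tag -> GRE tunnel key, then out through the tunnel port
ofctl("br-tun",
      f"priority=10,in_port=patch-int,dl_vlan={TENANT_VLAN},"
      f"actions=strip_vlan,set_tunnel:{TENANT_TUN_KEY},output:gre-0a7d0001")
```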

Virtual bridges are interconnected by means of either virtual Ethernet (veth) pairs or patch port pairs, consisting of two virtual interfaces that act as the endpoints of a pipe: anything entering one endpoint always comes out on the other side.

From the networking point of view, the creation of a new VM instance involves the following steps:

(i) The OpenStack scheduler component running in the controller node chooses the compute node that will host the VM.
(ii) A tap interface is created for each VM network interface to connect it to the Linux kernel.
(iii) A Linux Bridge dedicated to each VM network interface is created (in Figure 3 two of them are shown) and the corresponding tap interface is attached to it.
(iv) A veth pair connecting the new Linux Bridge to the integration bridge is created.

The veth pair clearly emulates the Ethernet cable that would connect the two bridges in real life. Nonetheless, why the new Linux Bridge is needed is not intuitive, as the VM's tap interface could be directly attached to br-int. In short, the reason is that the antispoofing rules currently implemented by Neutron adopt the native Linux kernel filtering functions (netfilter) applied to bridged tap interfaces, which work only under Linux Bridges. Therefore, the Linux Bridge is required as an intermediate element to interconnect the VM to the integration bridge. The security rules are applied to the Linux Bridge on the tap interface that connects the kernel-level bridge to the virtual Ethernet port of the VM running in user-space.
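A minimal sketch of such an antispoofing rule, using the netfilter physdev match that only works on bridged interfaces (our own illustrative example; Neutron actually organizes these rules in per-port chains, and the interface name and IP address below are hypothetical):

```python
import subprocess

TAP_IFACE = "tap03b80be1-55"  # VM port name, hypothetical (cf. Figure 3)
VM_IP = "10.0.1.5"            # address allocated to the VM, hypothetical

# The physdev module matches on the bridge port a frame entered from,
# which is why the intermediate Linux Bridge is needed: drop anything
# the VM sends with a source address different from its own.
# (Assumes br_netfilter is loaded so bridged traffic hits iptables.)
subprocess.run([
    "iptables", "-A", "FORWARD",
    "-m", "physdev", "--physdev-in", TAP_IFACE,
    "!", "-s", VM_IP,
    "-j", "DROP",
], check=True)
```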

4. Experimental Setup

The previous section makes the complexity of the OpenStack virtual network infrastructure clear. To understand optimal design strategies in terms of network performance, it is of great importance to analyze it under critical traffic conditions and assess the maximum sustainable packet rate under different application scenarios. The goal is to isolate as much as possible the level of performance of the main OpenStack network components and determine where the bottlenecks are located, speculating on possible improvements. To this purpose, a test-bed including a controller node, one or two compute nodes (depending on the specific experiment), and a network node was deployed and used to obtain the results presented in the following. In the test-bed, each compute node runs KVM, the native Linux VM Hypervisor, and is equipped with 8 GB of RAM and a quad-core processor enabled to hyperthreading, resulting in 8 virtual CPUs.

[Figure 2: Network elements in an OpenStack network node connected to three virtual subnets. Three OVS bridges (red boxes) are interconnected by patch port pairs (orange boxes). br-ex is directly attached to the external network physical interface (eth0), whereas a GRE tunnel is established on the instance/tunnel network physical interface (eth1) to connect br-tun with its counterpart in the compute node. A number of br-int ports (light-green boxes) are connected to four virtual router interfaces and three DHCP servers. An additional physical interface (eth2) connects the network node to the management network.]

The test-bed was configured to implement three possible use cases:

(1) a typical single tenant cloud computing scenario;
(2) a multitenant NFV scenario with dedicated network functions;
(3) a multitenant NFV scenario with shared network functions.

For each use case, multiple experiments were executed, as reported in the following. In the various experiments, typically a traffic source sends packets at increasing rate to a destination that measures the received packet rate and throughput. To this purpose, the RUDE & CRUDE tool was used for both traffic generation and measurement [26].

In some cases, the Iperf3 tool was also added to generate background traffic at a fixed data rate [27]. All physical interfaces involved in the experiments were Gigabit Ethernet network cards.
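For reference, the following self-contained sketch reproduces the spirit of the RUDE sender (it is not the actual tool): it emits fixed-size UDP datagrams at a target packet rate, so that the receiver can measure the sustained rate and throughput. Addresses and rates in the usage comment are hypothetical:

```python
import socket
import time

def send_udp_flow(dst_ip, dst_port, pps, payload_len, duration_s):
    """Emit fixed-size UDP datagrams at (approximately) a constant packet rate."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"\x00" * payload_len  # 1472 B of UDP payload -> 1500-byte IP packet
    interval = 1.0 / pps
    deadline = time.monotonic() + duration_s
    next_tx = time.monotonic()
    sent = 0
    while time.monotonic() < deadline:
        sock.sendto(payload, (dst_ip, dst_port))
        sent += 1
        next_tx += interval
        sleep = next_tx - time.monotonic()
        if sleep > 0:  # if we fall behind, simply catch up without sleeping
            time.sleep(sleep)
    return sent

# Example: 30 Kpps of 1500-byte IP packets for 60 s towards a sink VM
# print(send_udp_flow("10.0.1.10", 9000, 30_000, 1472, 60))
```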

[Figure 3: Network elements in an OpenStack compute node running two VMs. Two Linux Bridges (blue boxes) are attached to the VM tap interfaces (green boxes) and connected by virtual Ethernet pairs (light-blue boxes) to br-int.]

4.1. Single Tenant Cloud Computing Scenario. This is the typical configuration where a single tenant runs one or multiple VMs that exchange traffic with one another in the cloud or with an external host, as shown in Figure 4. This is a rather trivial case of limited general interest, but it is useful to assess some basic concepts and pave the way to the deeper analysis developed in the second part of this section. In the experiments reported, as mentioned above, the virtualization Hypervisor was always KVM. A scenario with OpenStack running the cloud environment and a scenario without OpenStack were considered, to assess some general comparison and allow a first isolation of the performance degradation due to the individual building blocks, in particular Linux Bridge and OVS. The experiments report the following cases:

(1) OpenStack scenario: it adopts the standard OpenStack cloud platform, as described in the previous section, with two VMs respectively acting as sender and receiver. In particular, the following setups were tested:
    (1.1) a single compute node executing two colocated VMs;
    (1.2) two distinct compute nodes, each executing a VM.
(2) Non-OpenStack scenario: it adopts physical hosts running Linux-Ubuntu server and KVM Hypervisor, using either OVS or Linux Bridge as a virtual switch. The following setups were tested:
    (2.1) one physical host executing two colocated VMs, acting as sender and receiver and directly connected to the same Linux Bridge;
    (2.2) the same setup as the previous one, but with an OVS bridge instead of a Linux Bridge;
    (2.3) two physical hosts, one executing the sender VM connected to an internal OVS and the other natively acting as the receiver.

[Figure 4: Reference logical architecture of a single tenant virtual infrastructure with 5 hosts: 4 hosts are implemented as VMs in the cloud and are interconnected via the OpenStack layer 2 virtual infrastructure; the 5th host is implemented by a physical machine placed outside the cloud but still connected to the same logical LAN.]

[Figure 5: Multitenant NFV scenario with dedicated network functions tested on the OpenStack platform. Each of the N tenant slices consists of a customer VM followed by a dedicated DPI, with a single virtual router shared by all tenants.]

4.2. Multitenant NFV Scenario with Dedicated Network Functions. The multitenant scenario we want to analyze is inspired by a simple NFV case study, as illustrated in Figure 5: each tenant's service chain consists of a customer-controlled VM, followed by a dedicated deep packet inspection (DPI) virtual appliance and a conventional gateway (router) connecting the customer LAN to the public Internet. The DPI is deployed by the service operator as a separate VM with two network interfaces, running a traffic monitoring application based on the nDPI library [28]. It is assumed that the DPI analyzes the traffic profile of the customers (source and destination IP addresses and ports, application protocol, etc.) to guarantee the matching with the customer service level agreement (SLA), a practice that is rather common among Internet service providers to enforce network security and traffic policing. The virtualization approach, executing the DPI in a VM, makes it possible to easily configure and adapt the inspection function to the specific tenant characteristics. For this reason, every tenant has its own DPI with dedicated configuration. On the other hand, the gateway has to implement a standard functionality and is shared among customers. It is implemented as a virtual router for packet forwarding and NAT operations.

The implementation of the test scenarios has been done following the OpenStack architecture. The compute nodes of the cluster run the VMs, while the network node runs the virtual router within a dedicated network namespace. All layer 2 connections are implemented by a virtual switch (with proper VLAN isolation) distributed in both the compute and network nodes. Figure 6 shows the view provided by the OpenStack dashboard, in the case of 4 tenants simultaneously active, which is the one considered for the numerical results presented in the following. The choice of 4 tenants was made to provide meaningful results with an acceptable degree of complexity, without lack of generality. As results show, this is enough to put the hardware resources of the compute node under stress and therefore evaluate performance limits and critical issues.

It is very important to outline that the VM setup shown in Figure 5 is not commonly seen in a traditional cloud computing environment. The VMs usually behave as single hosts connected as endpoints to one or more virtual networks, with one single network interface and no pass-through forwarding duties. In NFV, the virtual network functions (VNFs) often perform actions that require packet forwarding. Network Address Translators (NATs), Deep Packet Inspectors (DPIs), and so forth all belong to this category. If such VNFs are hosted in VMs, the result is that VMs in the OpenStack infrastructure must be allowed to perform packet forwarding, which goes against the typical rules implemented for security reasons in OpenStack. For instance, when a new VM is instantiated, it is attached to a Linux Bridge to which filtering rules are applied, with the goal of avoiding that the VM sends packets with MAC and IP addresses that are not the ones allocated to the VM itself. Clearly, this is an antispoofing rule that makes perfect sense in a normal networking environment, but it impairs the forwarding of packets originated by another VM, as is the case of the NFV scenario. In the scenario considered here, it was therefore necessary to permanently modify the filtering rules in the Linux Bridges, by allowing, within each tenant slice, packets coming from or directed to the customer VM's IP address to pass through the Linux Bridges attached to the DPI virtual appliance. Similarly, the virtual router is usually connected just to one LAN; therefore its NAT function is configured for a single pool of addresses. This was also modified and adapted to serve the whole set of internal networks used in the multitenant setup.
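The following sketch summarizes the kind of commands involved in both adaptations (our own illustration; chain layout, namespace, interface names, and subnets are hypothetical, the interface names being modeled on Figures 2 and 8):

```python
import subprocess

def sh(*cmd):
    subprocess.run(cmd, check=True)

# (1) On the compute node: let packets sourced by the customer VM of each
# tenant slice pass through the Linux Bridge attached to that tenant's DPI,
# overriding the default antispoofing drop.
TENANTS = {"10.0.1.0/24": "tap77f8a413-49",
           "10.0.2.0/24": "tapf0b115c8-48"}
for subnet, dpi_tap in TENANTS.items():
    sh("iptables", "-I", "FORWARD",
       "-m", "physdev", "--physdev-in", dpi_tap,
       "-s", subnet, "-j", "ACCEPT")

# (2) In the virtual router's namespace on the network node: extend the NAT
# function from a single address pool to every tenant's internal network.
ROUTER_NS = "qrouter-xxxxxxxx"  # network namespace of the virtual router
for subnet in TENANTS:
    sh("ip", "netns", "exec", ROUTER_NS,
       "iptables", "-t", "nat", "-A", "POSTROUTING",
       "-s", subnet, "-o", "qg-9326d793-0f", "-j", "MASQUERADE")
```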

[Figure 6: The OpenStack dashboard shows the tenants' virtual networks (slices). Each slice includes a VM connected to an internal network (InVMnet_i) and a second VM performing DPI and packet forwarding between InVMnet_i and DPInet_i. Connectivity with the public Internet is provided for all by the virtual router in the bottom-left corner.]

4.3. Multitenant NFV Scenario with Shared Network Functions. We finally extend our analysis to a set of multitenant scenarios assuming different levels of shared VNFs, as illustrated in Figure 7. We start with a single VNF, that is, the virtual router connecting all tenants to the external network (Figure 7(a)). Then we progressively add a shared DPI (Figure 7(b)), a shared firewall/NAT function (Figure 7(c)), and a shared traffic shaper (Figure 7(d)). The rationale behind this last group of setups is to evaluate how NFV deployment on top of an OpenStack compute node performs under a realistic multitenant scenario, where traffic flows must be processed by a chain of multiple VNFs. The complexity of the virtual network path inside the compute node for the VNF chaining of Figure 7(d) is displayed in Figure 8. The peculiar nature of NFV traffic flows is clearly shown in the figure, where packets are being forwarded multiple times across br-int as they enter and exit the multiple VNFs running in the compute node.

[Figure 7: Multitenant NFV scenario with shared network functions tested on the OpenStack platform: (a) single VNF (virtual router only); (b) two VNFs' chaining (DPI + virtual router); (c) three VNFs' chaining (DPI + firewall/NAT + virtual router); (d) four VNFs' chaining (DPI + firewall/NAT + traffic shaper + virtual router).]

5. Numerical Results

5.1. Benchmark Performance. Before presenting and discussing the performance of the study scenarios described above, it is important to set some benchmark as a reference for comparison. This was done by considering a back-to-back (B2B) connection between two physical hosts with the same hardware configuration used in the cluster of the cloud platform.

The former host acts as traffic generator, while the latter acts as traffic sink. The aim is to verify and assess the maximum throughput and sustainable packet rate of the hardware platform used for the experiments. Packet flows ranging from 10^3 to 10^5 packets per second (pps), for both 64- and 1500-byte IP packet sizes, were generated.

For 1500-byte packets, the throughput saturates to about 970 Mbps at 80 Kpps. Given that the measurement does not consider the Ethernet overhead, this limit is clearly very close to the 1 Gbps which is the physical limit of the Ethernet interface. For 64-byte packets the results are different, since the maximum measured throughput is about 150 Mbps. Therefore, the limiting factor is not the Ethernet bandwidth, but the maximum sustainable packet processing rate of the compute node. These results are shown in Figure 9.
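A quick back-of-the-envelope check (our own computation, assuming the standard 38 bytes of per-frame Ethernet overhead: 7-byte preamble + 1-byte start delimiter + 14-byte header + 4-byte FCS + 12-byte interframe gap) confirms both regimes:

```latex
% 1500-byte packets: the link is the bottleneck
80\,\mathrm{Kpps} \times 1500\,\mathrm{B} \times 8 = 960\,\mathrm{Mbps}\ \text{(measured as IP payload)}
80\,\mathrm{Kpps} \times (1500 + 38)\,\mathrm{B} \times 8 \approx 984\,\mathrm{Mbps} \approx 1\,\mathrm{Gbps}\ \text{(on the wire)}

% 64-byte packets: the link could carry about
10^{9}\,\mathrm{bps} \mathbin{/} \bigl[(64 + 38)\,\mathrm{B} \times 8\bigr] \approx 1.22\,\mathrm{Mpps},
% far more than the measured 150\,\mathrm{Mbps} \mathbin{/} (64\,\mathrm{B} \times 8) \approx 293\,\mathrm{Kpps},
% so the compute node's packet processing rate is the bottleneck.
```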

This latter limitation, related to the processing capabilities of the hosts, is not very relevant to the scopes of this work. Indeed, it is always possible, in a real operation environment, to deploy more powerful and better dimensioned hardware. This was not possible in this set of experiments, where the cloud cluster was an existing research infrastructure which could not be modified at will. Nonetheless, the objective here is to understand the limitations that emerge as a consequence of the networking architecture resulting from the deployment of the VNFs in the cloud, and not of the specific hardware configuration. For these reasons, as well as for the sake of brevity, the numerical results presented in the following mostly focus on the case of 1500-byte packet length, which will stress the network more than the hosts in terms of performance.

5.2. Single Tenant Cloud Computing Scenario. The first series of results is related to the single tenant scenario described in Section 4.1. Figure 10 shows the comparison of OpenStack setups (1.1) and (1.2) with the B2B case. The figure shows that the different networking configurations play a crucial role in performance. Setup (1.1), with the two VMs colocated in the same compute node, clearly is more demanding, since the compute node has to process the workload of all the components shown in Figure 3, that is, packet generation and reception in two VMs and layer 2 switching in two Linux Bridges and two OVS bridges (as a matter of fact, the packets are both outgoing and incoming at the same time within the same physical machine). The performance starts deviating from the B2B case at around 20 Kpps, with a saturating effect starting at 30 Kpps. This is the maximum packet processing capability of the compute node, regardless of the physical networking capacity, which is not fully exploited in this particular scenario where the traffic flow does not leave the physical host. Setup (1.2) splits the workload over two physical machines, and the benefit is evident. The performance is almost ideal, with a very little penalty due to the virtualization overhead.

These very simple experiments lead to an important conclusion that motivates the more complex experiments that follow: the standard OpenStack virtual network implementation can show significant performance limitations. For this reason, the first objective was to investigate where the possible bottleneck is, by evaluating the performance of the virtual network components in isolation. This cannot be done with OpenStack in action; therefore, ad hoc virtual networking scenarios were implemented, deploying just parts of the typical OpenStack infrastructure. These are called Non-OpenStack scenarios in the following.

[Figure 8: A view of the OpenStack compute node with the tenant VM and the VNFs installed, including the building blocks of the virtual network infrastructure. The red dashed line shows the path followed by the packets traversing the VNF chain displayed in Figure 7(d).]

[Figure 9: Throughput versus generated packet rate in the B2B setup for 64- and 1500-byte packets. Comparison with ideal 1500-byte packet throughput.]

[Figure 10: Received versus generated packet rate in the OpenStack scenario, setups (1.1) and (1.2), with 1500-byte packets.]

[Figure 11: Received versus generated packet rate in the Non-OpenStack scenario, setups (2.1) and (2.2), with 1500-byte packets.]

Setups (2.1) and (2.2) compare Linux Bridge, OVS, and B2B, as shown in Figure 11. The graphs show interesting and important results that can be summarized as follows:

(i) The introduction of some virtual network component (thus introducing the processing load of the physical hosts in the equation) is always a cause of performance degradation, but with very different degrees of magnitude depending on the virtual network component.

(ii) OVS introduces a rather limited performance degradation, at very high packet rate, with a loss of some percent.

(iii) Linux Bridge introduces a significant performance degradation, starting well before the OVS case and leading to a loss in throughput as high as 50%.

The conclusion of these experiments is that the presence of additional Linux Bridges in the compute nodes is one of the main reasons for the OpenStack performance degradation. Results obtained from testing setup (2.3) are displayed in Figure 12, confirming that with OVS it is possible to reach performance comparable with the baseline.

[Figure 12: Received versus generated packet rate in the Non-OpenStack scenario, setup (2.3), with 1500-byte packets.]

5.3. Multitenant NFV Scenario with Dedicated Network Functions. The second series of experiments was performed with reference to the multitenant NFV scenario with dedicated network functions described in Section 4.2. The case study considers that different numbers of tenants are hosted in the same compute node, sending data to a destination outside the LAN, therefore beyond the virtual gateway. Figure 13 shows the packet rate actually received at the destination for each tenant, for different numbers of simultaneously active tenants, with 1500-byte IP packet size. In all cases the tenants generate the same amount of traffic, resulting in as many overlapping curves as the number of active tenants. All curves grow linearly as long as the generated traffic is sustainable, and then they saturate. The saturation is caused by the physical bandwidth limit imposed by the Gigabit Ethernet interfaces involved in the data transfer. In fact, the curves become flat as soon as the packet rate reaches about 80 Kpps for 1 tenant, about 40 Kpps for 2 tenants, about 27 Kpps for 3 tenants, and about 20 Kpps for 4 tenants, that is, when the total packet rate is slightly more than 80 Kpps, corresponding to 1 Gbps.

[Figure 13: Received versus generated packet rate for each tenant (T1, T2, T3, and T4) for different numbers of active tenants, with 1500-byte IP packet size.]
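The per-tenant saturation points follow directly from an even division of this line-rate limit among the N active tenants (a simple consistency check, not a fitted model):

```latex
R_{\mathrm{tenant}}(N) \approx \frac{R_{\max}}{N} \approx \frac{80\,\mathrm{Kpps}}{N}
\quad\Longrightarrow\quad 80,\ 40,\ 26.7,\ 20\ \mathrm{Kpps}\ \text{for}\ N = 1, 2, 3, 4
```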

In this case it is worth investigating what happens for small packets, therefore putting more pressure on the processing capabilities of the compute node. Figure 14 reports the 64-byte packet size case. As discussed previously, in this case the performance saturation is not caused by the physical bandwidth limit, but by the inability of the hardware platform to cope with the packet processing workload (in fact, the single compute node has to process the workload of all the components involved, including packet generation and DPI in the VMs of each tenant, as well as layer 2 packet processing and switching in three Linux Bridges per tenant and two OVS bridges). As could be easily expected from the results presented in Figure 9, the virtual network is not able to use the whole physical capacity. Even in the case of just one tenant, a total bit rate of about 77 Mbps, well below 1 Gbps, is measured. Moreover, this penalty increases with the number of tenants (i.e., with the complexity of the virtual system). With two tenants the curve saturates at a total of approximately 150 Kpps (75 x 2), with three tenants at a total of approximately 135 Kpps (45 x 3), and with four tenants at a total of approximately 120 Kpps (30 x 4). This is to say that an increase of one unit in the number of tenants results in a decrease of about 10% in the usable overall network capacity and in a similar penalty per tenant.

[Figure 14: Received versus generated packet rate for each tenant (T1, T2, T3, and T4) for different numbers of active tenants, with 64-byte IP packet size.]

Given the results of the previous section, it is likely that the Linux Bridges are responsible for most of this performance degradation. In Figure 15 a comparison is presented between the total throughput obtained under normal OpenStack operations and the corresponding total throughput measured in a custom configuration where the Linux Bridges attached to each VM are bypassed. To implement the latter scenario, the OpenStack virtual network configuration running in the compute node was modified by connecting each VM's tap interface directly to the OVS integration bridge. The curves show that the presence of Linux Bridges in normal OpenStack mode is indeed causing performance degradation, especially when the workload is high (i.e., with 4 tenants). It is interesting to note also that the penalty related to the number of tenants is mitigated by the bypass, but not fully solved.

[Figure 15: Total throughput measured versus total packet rate generated by 2 to 4 tenants, for 64-byte packet size. Comparison between normal OpenStack mode and Linux Bridge bypass with 3 and 4 tenants.]

5.4. Multitenant NFV Scenario with Shared Network Functions. The third series of experiments was performed with reference to the multitenant NFV scenario with shared network functions described in Section 4.3. In each experiment, four tenants are equally generating increasing amounts of traffic, ranging from 1 to 100 Kpps. Figures 16 and 17 show the packet rate actually received at the destination from tenant T1, as a function of the packet rate generated by T1, for different levels of VNF chaining, with 1500- and 64-byte IP packet size, respectively. The measurements demonstrate that, for the 1500-byte case, adding a single shared VNF (even one that executes heavy packet processing, such as the DPI) does not significantly impact the forwarding performance of the OpenStack compute node for a packet rate below 50 Kpps (note that the physical capacity is saturated by the flows simultaneously generated from four tenants at around 20 Kpps, similarly to what happens in the dedicated VNF case of Figure 13). Then the throughput slowly degrades. In contrast, when 64-byte packets are generated, even a single VNF can cause heavy performance losses above 25 Kpps, when the packet rate reaches the sustainability limit of the forwarding capacity of our compute node. Independently of the packet size, adding another VNF with heavy packet processing (the firewall/NAT is configured with 40,000 matching rules) causes the performance to rapidly degrade. This is confirmed when a fourth VNF is added to the chain, although for the 1500-byte case the measured packet rate is the one that saturates the maximum bandwidth made available by the traffic shaper. Very similar performance, which we do not show here, was measured also for the other three tenants.

[Figure 16: Received versus generated packet rate for one tenant (T1) when four tenants are active, with 1500-byte IP packet size and different levels of VNF chaining as per Figure 7. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.]

[Figure 17: Received versus generated packet rate for one tenant (T1) when four tenants are active, with 64-byte IP packet size and different levels of VNF chaining as per Figure 7. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.]

To further investigate the effect of VNF chaining, we considered the case when traffic generated by tenant T1 is not subject to VNF chaining (as in Figure 7(a)), whereas flows originated from T2, T3, and T4 are processed by four VNFs (as in Figure 7(d)). The results presented in Figures 18 and 19 demonstrate that, owing to the traffic shaping function applied to the other tenants, the throughput of T1 can reach values not very far from the case when it is the only active tenant, especially for packet rates below 35 Kpps. Therefore, a smart choice of the VNF chaining and a careful planning of the cloud platform resources could improve the performance of a given class of priority customers. In the same situation we measured the TCP throughput achievable by the four tenants. As shown in Figure 20, we can reach the same conclusions as in the UDP case.

[Figure 18: Received throughput versus generated packet rate for each tenant (T1, T2, T3, and T4) when T1 does not traverse the VNF chain of Figure 7(d), with 1500-byte IP packet size. Comparison with the single tenant case. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.]

[Figure 19: Received throughput versus generated packet rate for each tenant (T1, T2, T3, and T4) when T1 does not traverse the VNF chain of Figure 7(d), with 64-byte IP packet size. Comparison with the single tenant case. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.]

[Figure 20: Received TCP throughput for each tenant (T1, T2, T3, and T4) when T1 does not traverse the VNF chain of Figure 7(d). Comparison with the single tenant case. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.]

6. Conclusion

Network Function Virtualization will completely reshape the approach of telco operators to provide existing as well as novel network services, taking advantage of the increased flexibility and reduced deployment costs of the cloud computing paradigm. In this work, the problem of evaluating complexity and performance, in terms of sustainable packet rate, of virtual networking in cloud computing infrastructures dedicated to NFV deployment was addressed. An OpenStack-based cloud platform was considered and deeply analyzed to fully understand the architecture of its virtual network infrastructure. To this end, an ad hoc visual tool was also developed that graphically plots the different functional blocks (and related interconnections) put in place by Neutron, the OpenStack networking service. Some examples were provided in the paper.

The analysis brought the focus of the performance investigation on the two basic software switching elements natively adopted by OpenStack, namely, Linux Bridge and Open vSwitch. Their performance was first analyzed in a single tenant cloud computing scenario, by running experiments on a standard OpenStack setup, as well as in ad hoc stand-alone configurations built with the specific purpose of observing them in isolation. The results prove that the Linux Bridge is the critical bottleneck of the architecture, while Open vSwitch shows an almost optimal behavior.

The analysis was then extended to more complex scenarios, assuming a data center hosting multiple tenants deploying NFV environments. The case studies considered first a simple dedicated deep packet inspection function, followed by conventional address translation and routing, and then a more realistic virtual network function chaining shared among a set of customers with increased levels of complexity. Results about sustainable packet rate and throughput performance of the virtual network infrastructure were presented and discussed.

The main outcome of this work is that an open-source cloud computing platform such as OpenStack can be effectively adopted to deploy NFV in network edge data centers replacing legacy telco central offices. However, this solution poses some limitations to the network performance, which are not simply related to the hosting hardware maximum capacity, but also to the virtual network architecture implemented by OpenStack. Nevertheless, our study demonstrates that some of these limitations can be mitigated with a careful redesign of the virtual network infrastructure and an optimal planning of the virtual network functions. In any case, such limitations must be carefully taken into account for any engineering activity in the virtual networking arena.

Obviously, scaling up the system and distributing the virtual network functions among several compute nodes will definitely improve the overall performance. However, in this case the role of the physical network infrastructure becomes critical, and an accurate analysis is required in order to isolate the contributions of virtual and physical components. We plan to extend our study in this direction in our future work, after properly upgrading our experimental test-bed.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work was partially funded by EIT ICT Labs, Action Line on Future Networking Solutions, Activity no. 15270/2015 "SDN at the Edges". The authors would like to thank Mr. Giuliano Santandrea for his contributions to the experimental setup.

References

[1] B. Han, V. Gopalakrishnan, L. Ji, and S. Lee, "Network function virtualization: challenges and opportunities for innovations," IEEE Communications Magazine, vol. 53, no. 2, pp. 90-97, 2015.
[2] N. M. Mosharaf Kabir Chowdhury and R. Boutaba, "A survey of network virtualization," Computer Networks, vol. 54, no. 5, pp. 862-876, 2010.
[3] The Open Networking Foundation, Software-Defined Networking: The New Norm for Networks, ONF White Paper, The Open Networking Foundation, 2012.
[4] The European Telecommunications Standards Institute, "Network functions virtualisation (NFV): architectural framework," ETSI GS NFV 002 V1.2.1, The European Telecommunications Standards Institute, 2014.
[5] A. Manzalini, R. Minerva, F. Callegati, W. Cerroni, and A. Campi, "Clouds of virtual machines in edge networks," IEEE Communications Magazine, vol. 51, no. 7, pp. 63-70, 2013.
[6] J. Soares, C. Goncalves, B. Parreira et al., "Toward a telco cloud environment for service functions," IEEE Communications Magazine, vol. 53, no. 2, pp. 98-106, 2015.
[7] Open Networking Lab, Central Office Re-Architected as Datacenter (CORD), ON.Lab White Paper, Open Networking Lab, 2015.
[8] K. Pretz, "Software already defines our lives--but the impact of SDN will go beyond networking alone," IEEE The Institute, vol. 38, no. 4, p. 8, 2014.
[9] OpenStack Project, http://www.openstack.org.
[10] F. Sans and E. Gamess, "Analytical performance evaluation of different switch solutions," Journal of Computer Networks and Communications, vol. 2013, Article ID 953797, 11 pages, 2013.
[11] P. Emmerich, D. Raumer, F. Wohlfart, and G. Carle, "Performance characteristics of virtual switching," in Proceedings of the 3rd International Conference on Cloud Networking (CloudNet '14), pp. 120-125, IEEE, Luxembourg City, Luxembourg, October 2014.
[12] R. Shea, F. Wang, H. Wang, and J. Liu, "A deep investigation into network performance in virtual machine based cloud environments," in Proceedings of the 33rd IEEE Conference on Computer Communications (INFOCOM '14), pp. 1285-1293, IEEE, Ontario, Canada, May 2014.
[13] P. Rad, R. V. Boppana, P. Lama, G. Berman, and M. Jamshidi, "Low-latency software defined network for high performance clouds," in Proceedings of the 10th System of Systems Engineering Conference (SoSE '15), pp. 486-491, San Antonio, Tex, USA, May 2015.
[14] S. Oechsner and A. Ripke, "Flexible support of VNF placement functions in OpenStack," in Proceedings of the 1st IEEE Conference on Network Softwarization (NETSOFT '15), pp. 1-6, London, UK, April 2015.
[15] G. Almasi, M. Banikazemi, B. Karacali, M. Silva, and J. Tracey, "Openstack networking: it's time to talk performance," in Proceedings of the OpenStack Summit, Vancouver, Canada, May 2015.
[16] F. Callegati, W. Cerroni, C. Contoli, and G. Santandrea, "Performance of network virtualization in cloud computing infrastructures: the Openstack case," in Proceedings of the 3rd IEEE International Conference on Cloud Networking (CloudNet '14), pp. 132-137, Luxemburg City, Luxemburg, October 2014.
[17] F. Callegati, W. Cerroni, C. Contoli, and G. Santandrea, "Performance of multi-tenant virtual networks in OpenStack-based cloud infrastructures," in Proceedings of the 2nd IEEE Workshop on Cloud Computing Systems, Networks, and Applications (CCSNA '14), in Conjunction with IEEE Globecom 2014, pp. 81-85, Austin, Tex, USA, December 2014.
[18] N. Handigol, B. Heller, V. Jeyakumar, B. Lantz, and N. McKeown, "Reproducible network experiments using container-based emulation," in Proceedings of the 8th International Conference on Emerging Networking Experiments and Technologies (CoNEXT '12), pp. 253-264, ACM, December 2012.
[19] P. Bellavista, F. Callegati, W. Cerroni et al., "Virtual network function embedding in real cloud environments," Computer Networks, vol. 93, part 3, pp. 506-517, 2015.
[20] M. F. Bari, R. Boutaba, R. Esteves et al., "Data center network virtualization: a survey," IEEE Communications Surveys & Tutorials, vol. 15, no. 2, pp. 909-928, 2013.
[21] The Linux Foundation, Linux Bridge, The Linux Foundation, 2009, http://www.linuxfoundation.org/collaborate/workgroups/networking/bridge.
[22] J. T. Yu, "Performance evaluation of Linux bridge," in Proceedings of the Telecommunications System Management Conference, Louisville, Ky, USA, April 2004.
[23] B. Pfaff, J. Pettit, T. Koponen, K. Amidon, M. Casado, and S. Shenker, "Extending networking into the virtualization layer," in Proceedings of the 8th ACM Workshop on Hot Topics in Networks (HotNets '09), New York, NY, USA, October 2009.
[24] G. Santandrea, Show My Network State, 2014, https://sites.google.com/site/showmynetworkstate.
[25] R. Jain and S. Paul, "Network virtualization and software defined networking for cloud computing: a survey," IEEE Communications Magazine, vol. 51, no. 11, pp. 24-31, 2013.
[26] RUDE & CRUDE: Real-Time UDP Data Emitter & Collector for RUDE, http://sourceforge.net/projects/rude.
[27] iperf3: a TCP, UDP, and SCTP network bandwidth measurement tool, https://github.com/esnet/iperf.
[28] nDPI: Open and Extensible LGPLv3 Deep Packet Inspection Library, http://www.ntop.org/products/ndpi.

Research Article

Server Resource Dimensioning and Routing of Service Function Chain in NFV Network Architectures

V. Eramo,1 A. Tosti,2 and E. Miucci1

1 DIET, "Sapienza" University of Rome, Via Eudossiana 18, 00184 Rome, Italy
2 Telecom Italia, Via di Val Cannuta 250, 00166 Roma, Italy

Correspondence should be addressed to E. Miucci; 1kronos1@gmail.com

Received 29 September 2015; Accepted 18 February 2016

Academic Editor: Vinod Sharma

Copyright © 2016 V. Eramo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The Network Function Virtualization (NFV) technology aims at virtualizing the network service with the execution of the single service components in Virtual Machines activated on Commercial-off-the-shelf (COTS) servers. Any service is represented by a Service Function Chain (SFC), that is, a set of VNFs to be executed according to a given order. The running of VNFs needs the instantiation of VNF instances (VNFIs) that, in general, are software components executed on Virtual Machines. In this paper we cope with the routing and resource dimensioning problem in NFV architectures. We formulate the optimization problem and, due to its NP-hard complexity, heuristics are proposed for both cases of offline and online traffic demand. We show how the heuristics work correctly by guaranteeing a uniform occupancy of the server processing capacity and the network link bandwidth. A consolidation algorithm for the power consumption minimization is also proposed. The application of the consolidation algorithm allows for a high power consumption saving that, however, is to be paid with an increase in SFC blocking probability.

1. Introduction

Today's networks are overly complex, partly due to an increasing variety of proprietary, fixed-function appliances that are unable to deliver the agility and economics needed to address constantly changing market requirements [1]. This is because network elements have traditionally been optimized for high packet throughput at the expense of flexibility, thus hampering the deployment of new services [2]. Network Function Virtualization (NFV) can provide the infrastructure flexibility and agility needed to successfully compete in today's evolving communications landscape [3]. NFV implements network functions in software running on a pool of shared commodity servers, instead of using dedicated proprietary hardware. This virtualized approach decouples the network hardware from the network function and results in increased infrastructure flexibility and reduced hardware costs. Because the infrastructure is simplified and streamlined, new and expanded services can be created quickly and with less expense. Implementation of the paradigm has also been proposed [4] and the performance has been investigated [5]. To support the NFV technology, both ETSI [6, 7] and IETF [8, 9] are defining novel network architectures able to allocate resources for Virtualized Network Functions (VNFs) as well as manage and orchestrate NFV to support services. In particular, the service is represented by a Service Function Chain (SFC) [8], that is, a set of VNFs that have to be executed according to a given order. Any VNF is run on a VNF instance (VNFI), implemented with one Virtual Machine (VM) whose resources (Vcores, RAM memory, etc.) are allocated to [10]. Some solutions have been proposed in the literature to solve the problem of choosing the servers where to instantiate VNFs and to determine the network paths interconnecting the VNFs [1]. A formulation of the optimization problem is illustrated in [11]. Three greedy algorithms and a tabu search-based heuristic are proposed in [12]. The extension of the problem in the case in which virtual routers are also considered is proposed in [13].

The innovative contribution of our paper is that, differently from [12], we follow an approach that allows for both the resource dimensioning and the SFC routing. We formulate the optimization problem whose objective is the minimization of the number of dropped SFC requests, with the constraint that the SFCs are routed so as to respect both the server processing capacity and network link bandwidth. With the problem being NP-hard, we introduce a heuristic that allows for (i) the dimensioning of the Virtual Machines in terms of number of Vcores assigned and (ii) the SFC routing through the VNF instances implemented by the Virtual Machines. The heuristic performs simultaneously the dimensioning and routing operations. Furthermore, we propose a consolidation algorithm based on Virtual Machine migrations and able to achieve power consumption savings. The proposed algorithms are evaluated in scenarios characterized by offline and online traffic demands.

The paper is organized as follows. The related work is discussed in Section 2. Section 3 is devoted to illustrating the optimization problem and the proposed heuristic for the offline traffic demand case. In Section 4 we describe an SFC planner in which the proposed heuristic is applied in the case of online traffic demand. The planner also implements a consolidation algorithm whose application allows for power consumption savings. Some numerical results are shown in Section 5 to prove the effectiveness of the proposed heuristics. Finally, the main conclusions and future research items are mentioned in Section 6.

2. Related Work

The Internet Engineering Task Force (IETF) has formed the SFC Working Group [8, 9] to define Service Function Chaining related problems and to standardize the architecture and protocols. A Service Function Chain (SFC) is defined as a set of abstract service functions [15] and ordering constraints that must be applied to packets selected as a result of the classification. When virtual service functions are considered, the SFC is referred to as Virtual Network Function Forwarding Graph (VNFFG) within the ETSI [6]. To support the SFCs, Virtual Network Function instances (VNFIs) are activated and executed in COTS servers. To achieve the economics of scale expected from NFV, network link and server resources should be used efficiently. For this reason, efficient algorithms have to be introduced to determine where to instantiate the VNFIs and to route the SFCs, by choosing the network paths and the VNFIs involved. The algorithms have to take into account the limited resources of the network links and the servers, and pursued objectives of load balancing, energy saving, recovery from failure, and so on [1]. The task of placing SFCs is closely related to virtual network embedding [16] and virtual data network embedding [17] and may therefore be formulated as an optimization problem with a particular objective. The approach has been followed by [11, 18-21]. For instance, Moens and Turck [11] formulate the SFC placement problem as a linear optimization problem that has the objective of minimizing the number of active servers. Other objectives are pursued in [19] (latency, remaining data rate, number of used network nodes, etc.) and the SFC placing is formulated as a mixed integer quadratically constrained program.

It has been proved that the SFC placing problem is NP-hard. For this reason, efficient heuristics have been proposed and evaluated [10, 13, 22, 23]. For example, Xia et al. [22] formulate the placement and chaining problem as binary integer programming and propose a greedy heuristic in which the SFCs are first sorted according to their resource demands and the SFCs with the highest resource demands are given priority for placement and routing, as sketched below.

In order to achieve energy consumption saving, the NFV architecture should allow for migrations of VNFIs, that is, the migration of the Virtual Machines implementing the VNFIs. Though VNFI migrations allow for energy consumption saving, they may impact the QoS performance received by the users of the migrated VNFs. A model has been proposed in [24] to derive some performance indicators, such as the whole service downtime and the total migration time, so as to make function migration decisions.

The contribution of this paper is twofold: (i) to propose and to evaluate the performance of an algorithm that performs resource dimensioning and SFC routing simultaneously and (ii) to investigate the advantages, from the point of view of power consumption saving, that the application of server consolidation techniques allows us to achieve.

3 Offline Algorithms for SFC Routing in NFV Network Architectures

We consider the case in which SFC requests are known in advance. We formulate the optimization problem whose objective is the minimization of the number of dropped SFC requests, with the constraint that the SFCs are routed so as to respect both the server processing capacity and the network link bandwidth. With the problem being NP-hard, we introduce a heuristic that allows for (i) the dimensioning of the Virtual Machines in terms of the number of Vcores assigned and (ii) the SFC routing through the VNF instances implemented by the Virtual Machines. The heuristic performs the dimensioning and routing operations simultaneously.

The section is organized as follows. The network and traffic model is introduced in Section 3.1. Sections 3.2 and 3.3 are devoted to illustrating the optimization problem and the proposed heuristic respectively.

3.1 Network and Traffic Model. Next we introduce the main terminology used to represent the physical network, the VNFs, and the SFC traffic requests [25]. We represent the physical network PN as a directed graph $G^{PN} = (V^{PN}, E^{PN})$, where $V^{PN}$ and $E^{PN}$ are the sets of physical nodes and links respectively. The set $V^{PN}$ of nodes is given by the union of the three node sets $V^{PN}_A$, $V^{PN}_R$, and $V^{PN}_S$, which are the sets of access, switching, and server nodes respectively. The server nodes and links are characterized by the following:

(i) $N^{PN}_{core}(w)$: processing capacity of the server node $w \in V^{PN}_S$ in terms of the number of cores available;
(ii) $C^{PN}(e)$: bandwidth of the physical link $e \in E^{PN}$.

We assume that $F$ types of VNFs can be provisioned, such as firewall, IDS, proxy, load balancer, and so on. We denote by $\mathcal{F} = \{f_1, f_2, \ldots, f_F\}$ the set of VNFs, with $f_i$ being the $i$th VNF type. The packet processing time of the VNF $f_i$ is denoted by $t^{proc}_i$ ($i = 1, \ldots, F$).

We also assume that the network operator is receiving $T$ Service Function Chain (SFC) requests, known in advance. The $i$th SFC request is characterized by the graph $G^{SFC}_i = (V^{SFC}_i, E^{SFC}_i)$, where $V^{SFC}_i$ represents the set of access and VNF nodes and $E^{SFC}_i$ ($i = 1, \ldots, T$) denotes the links between them. In particular, the set $V^{SFC}_i$ is given by the union of $V^{SFC}_{i,A}$ and $V^{SFC}_{i,F}$, denoting the sets of access nodes and VNFs respectively. The graph is characterized by the following parameters:

(i) $\alpha_{v,w}$: assuming the value 1 if the access node $v \in \bigcup_{i=1}^{T} V^{SFC}_{i,A}$ characterizes an SFC request starting/terminating from/to the physical access node $w \in V^{PN}_A$; otherwise its value is 0;
(ii) $\beta_{v,k}$: assuming the value 1 if the VNF node $v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}$ needs the application of the VNF $f_k$ ($k \in [1, F]$); otherwise its value is 0;
(iii) $B^{SFC}(v)$: the processing capacity requested by the VNF node $v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}$; the parameter value depends on both the packet length and the bandwidth of the packet flow incoming to the VNF node;
(iv) $C^{SFC}(d)$: bandwidth requested by the virtual link $d \in \bigcup_{i=1}^{T} E^{SFC}_i$.

An example of SFC request is represented in Figure 1, where the traffic emitted by the ingress access node $u$ has to be handled by the VNF nodes $v_1$ and $v_2$ and is terminated at the egress access node $t$. The access and VNF nodes are interconnected by the virtual links $e_1$, $e_2$, and $e_3$. If the flow bandwidth is 1 Mbps and the packet length is 1500 bytes, we obtain the processing capacities $B^{SFC}(v_1) = B^{SFC}(v_2) = 83.33$ packets/s, while the bandwidths $C^{SFC}(e_1)$, $C^{SFC}(e_2)$, and $C^{SFC}(e_3)$ of all links equal 1 Mbps.

Figure 1: An example of graph representing an SFC request for a flow characterized by the bandwidth of 1 Mbps and packet length of 1500 bytes.
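Concretely, $B^{SFC}(v)$ is the packet rate offered to the VNF node. The following minimal Python sketch (the function name is ours, chosen for illustration) reproduces the numbers of the Figure 1 example:

```python
def processing_capacity(bandwidth_bps, packet_length_bytes):
    """B_SFC(v): packet rate offered to a VNF node, in packets per second."""
    return bandwidth_bps / (packet_length_bytes * 8)

# Flow of Figure 1: 1 Mbps with 1500-byte packets -> about 83.33 packets/s
print(processing_capacity(1e6, 1500))
```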

3.2 Optimization Problem for the SFC Routing and VNF Instance Dimensioning. The objective of the SFC Routing and VNF Instance Dimensioning (SRVID) problem is to maximize the number of accepted SFC requests. The output of the problem is characterized by the following: (i) the servers in which the VNF nodes are executed and (ii) the network paths in which the virtual links of the SFCs are routed. This arrangement has to be accomplished without violating either the server processing or the physical link capacities. We assume that each server creates one VNF instance for each type of VNF, which will be shared by the SFCs using that server and requesting that type of VNF. For this reason, each server will activate $F$ VNF instances, one for each type, and another output of the problem is the number of Vcores to be assigned to each VNF instance.

We assume that each virtual link of the SFC graphs can be routed through a single physical network path. We introduce the following notations:

(i) $\mathcal{P}$: set of paths in $G^{PN}$;
(ii) $\delta_{e,p}$: binary function assuming the value 1 if the physical link $e$ belongs to the path $p \in \mathcal{P}$ and 0 otherwise;
(iii) $a^{PN}(p)$ and $b^{PN}(p)$: origin and destination nodes of the path $p \in \mathcal{P}$;
(iv) $a^{SFC}(d)$ and $b^{SFC}(d)$: origin and destination nodes of the virtual link $d \in \bigcup_{i=1}^{T} E^{SFC}_i$.

Next we formulate the optimal SRVID problem, characterized by the following optimization variables:

(i) $x_h$: binary variable assuming the value 1 if the $h$th SFC request is accepted; otherwise its value is zero;
(ii) $y_{w,k}$: integer variable characterizing the number of Vcores allocated to the VNF instance of type $k$ in the server $w \in V^{PN}_S$;
(iii) $z^k_{v,w}$: binary variable assuming the value 1 if the VNF node $v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}$ is served by the VNF instance of type $k$ in the server $w \in V^{PN}_S$;
(iv) $u_{d,p}$: binary variable assuming the value 1 if the virtual link $d \in \bigcup_{i=1}^{T} E^{SFC}_i$ is embedded in the physical network path $p \in \mathcal{P}$; otherwise its value is zero.

Next we report the constraints for the optimization variables:

\[\sum_{k=1}^{F} y_{w,k} \le N^{PN}_{core}(w), \quad w \in V^{PN}_S \tag{1}\]

\[\sum_{k=1}^{F} \sum_{w \in V^{PN}_S} z^k_{v,w} \le 1, \quad v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F} \tag{2}\]

\[z^k_{v,w} \le \beta_{v,k}, \quad k \in [1,F], \; v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}, \; w \in V^{PN}_S \tag{3}\]

\[\sum_{v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}} z^k_{v,w}\, B^{SFC}(v)\, t^{proc}_k \le y_{w,k}, \quad k \in [1,F], \; w \in V^{PN}_S \tag{4}\]

\[u_{d,p} \le z^k_{a^{SFC}(d),\, a^{PN}(p)}, \quad a^{SFC}(d) \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}, \; k \in [1,F], \; p \in \mathcal{P} \tag{5}\]

\[u_{d,p} \le z^k_{b^{SFC}(d),\, b^{PN}(p)}, \quad b^{SFC}(d) \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}, \; k \in [1,F], \; p \in \mathcal{P} \tag{6}\]

\[u_{d,p} \le \alpha_{a^{SFC}(d),\, a^{PN}(p)}, \quad a^{SFC}(d) \in \bigcup_{i=1}^{T} V^{SFC}_{i,A}, \; p \in \mathcal{P} \tag{7}\]

\[u_{d,p} \le \alpha_{b^{SFC}(d),\, b^{PN}(p)}, \quad b^{SFC}(d) \in \bigcup_{i=1}^{T} V^{SFC}_{i,A}, \; p \in \mathcal{P} \tag{8}\]

\[\sum_{p \in \mathcal{P}} u_{d,p} \le 1, \quad d \in \bigcup_{i=1}^{T} E^{SFC}_i \tag{9}\]

\[\sum_{d \in \bigcup_{i=1}^{T} E^{SFC}_i} C^{SFC}(d) \sum_{p \in \mathcal{P}} \delta_{e,p}\, u_{d,p} \le C^{PN}(e), \quad e \in E^{PN} \tag{10}\]

\[x_h \le \sum_{k=1}^{F} \sum_{w \in V^{PN}_S} z^k_{v,w}, \quad h \in [1,T], \; v \in V^{SFC}_{h,F} \tag{11}\]

\[x_h \le \sum_{p \in \mathcal{P}} u_{d,p}, \quad h \in [1,T], \; d \in E^{SFC}_h \tag{12}\]

Constraint (1) establishes that at most a number of Vcores equal to the available ones is used for each server node $w \in V^{PN}_S$. Constraint (2) expresses the condition that a VNF node can be served by at most one VNF instance. To guarantee that a node $v \in V^{SFC}_{h,F}$ needing the application of a VNF type is mapped to a correct VNF instance, we introduce constraint (3). Constraint (4) limits the number of VNF nodes assigned to one VNF instance by taking into account both the number of Vcores assigned to the VNF instance and the required VNF node processing capacities. Constraints (5)-(8) establish that when the virtual link $d$ is supported by the physical network path $p$, then $a^{PN}(p)$ and $b^{PN}(p)$ must be the physical network nodes that the nodes $a^{SFC}(d)$ and $b^{SFC}(d)$ of the virtual graph are assigned to. The choice of mapping any virtual link on a single physical network path is represented by constraint (9). Constraint (10) avoids any overloading of any physical network link. Finally, constraints (11) and (12) establish that an SFC request can be accepted only when the nodes and the links of the SFC graph are assigned to VNF instances and physical network paths.

The problem objective is to maximize the number of SFCs accepted, given by

\[\max \sum_{h=1}^{T} x_h. \tag{13}\]

The introduced optimization problem is intractable because it requires solving an NP-hard bin packing problem [26]. Due to its high complexity, it is not possible to solve the problem directly in a timely manner given the large number of servers and network nodes. For this reason, we propose an efficient heuristic in Section 3.3.
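For experimentation, the server-side part of the SRVID ILP (constraints (1)-(4) and (11), objective (13)) can be sketched with the PuLP modeling library as below. The routing constraints (5)-(10) and (12) are omitted for brevity, and the toy instance (two servers, two VNF types, two single-node SFC requests) is our own assumption, not data from the paper:

```python
# Minimal PuLP sketch of the server-side part of the SRVID ILP.
import pulp

T, F = 2, 2                              # SFC requests and VNF types
servers = [0, 1]
N_core = {0: 4, 1: 4}                    # N_core^PN(w): Vcores per server
t_proc = {0: 7.08e-6, 1: 1.64e-6}        # packet processing time per type [s]
vnf_nodes = [(0, 0), (1, 1)]             # (request h, VNF node v)
beta = {0: {0: 1, 1: 0}, 1: {0: 0, 1: 1}}  # beta[v][k]
B = {0: 83333.0, 1: 83333.0}             # B^SFC(v) in packets/s

prob = pulp.LpProblem("SRVID_server_side", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", range(T), cat="Binary")
y = pulp.LpVariable.dicts("y", [(w, k) for w in servers for k in range(F)],
                          lowBound=0, cat="Integer")
z = pulp.LpVariable.dicts("z", [(v, w, k) for (_, v) in vnf_nodes
                                for w in servers for k in range(F)],
                          cat="Binary")

for w in servers:                        # constraint (1): Vcore budget
    prob += pulp.lpSum(y[(w, k)] for k in range(F)) <= N_core[w]
for (h, v) in vnf_nodes:                 # constraints (2), (3), (11)
    prob += pulp.lpSum(z[(v, w, k)] for w in servers for k in range(F)) <= 1
    for w in servers:
        for k in range(F):
            prob += z[(v, w, k)] <= beta[v][k]
    prob += x[h] <= pulp.lpSum(z[(v, w, k)] for w in servers for k in range(F))
for w in servers:                        # constraint (4): instance dimensioning
    for k in range(F):
        prob += pulp.lpSum(z[(v, w, k)] * B[v] * t_proc[k]
                           for (_, v) in vnf_nodes) <= y[(w, k)]

prob += pulp.lpSum(x[h] for h in range(T))   # objective (13)
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([pulp.value(x[h]) for h in range(T)])
```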

3.3 Maximizing the Accepted SFC Requests Number (MASRN) Heuristic. The Maximizing the Accepted SFC Requests Number (MASRN) heuristic is based on the embedding algorithm proposed in [14] and has the objective of maximizing the number of accepted SFC requests. The MASRN algorithm tries routing the SFCs one by one by using the least loaded link and server resources so as to balance their use. Algorithm 1 illustrates the operation mode of the MASRN algorithm for the routing of a target SFC, the $k_{tar}$th one. The main inputs of the algorithm are: (i) the set $T_{pre}$ containing the indexes of the SFC requests routed successfully before the $k_{tar}$th SFC request; (ii) the $k_{tar}$th SFC request graph $G^{SFC}_{k_{tar}} = (V^{SFC}_{k_{tar}}, E^{SFC}_{k_{tar}})$; (iii) the values of the parameters $\alpha_{v,w}$ for the $k_{tar}$th SFC request; (iv) the values of the variables $z^k_{v,w}$ and $u_{d,p}$, defined in Section 3.2, for the SFC requests successfully routed before the $k_{tar}$th SFC request.

The operation mode of the heuristic is based on the following variables:

(i) $A_k(w)$: the load of the cores allocated to the VNF instance of type $k \in [1,F]$ in the server $w \in V^{PN}_S$; the value of $A_k(w)$ is initialized by taking into account the core resource amount used by the SFCs successfully routed before the $k_{tar}$th SFC request; hence its initial value is

\[A_k(w) = \sum_{v \in \bigcup_{i \in T_{pre}} V^{SFC}_{i,F}} z^k_{v,w}\, B^{SFC}(v)\, t^{proc}_k. \tag{14}\]

(ii) $S_N(w)$: defined as the stress of the node $w \in V^{PN}_S$ [14] and characterizing the server load; its value is initialized to

\[S_N(w) = \sum_{k=1}^{F} A_k(w). \tag{15}\]

(iii) $S_L(e)$: defined as the stress of the physical network link $e \in E^{PN}$ [14] and characterizing the link load; its value is initialized to

\[S_L(e) = \sum_{d \in \bigcup_{i \in T_{pre}} E^{SFC}_i} C^{SFC}(d) \sum_{p \in \mathcal{P}} \delta_{e,p}\, u_{d,p}. \tag{16}\]

(1) Input: Physical Network Graph $G^{PN} = (V^{PN}, E^{PN})$; the $k_{tar}$th SFC request and its graph $G^{SFC}_{k_{tar}} = (V^{SFC}_{k_{tar}}, E^{SFC}_{k_{tar}})$; $T_{pre}$; $\alpha_{v,w}$, $v \in V^{SFC}_{k_{tar},A}$, $w \in V^{PN}_A$; $z^k_{v,w}$, $k \in [1,F]$, $v \in \bigcup_{i \in T_{pre}} V^{SFC}_{i,F}$, $w \in V^{PN}_S$; $u_{d,p}$, $d \in \bigcup_{i \in T_{pre}} E^{SFC}_i$, $p \in \mathcal{P}$
(2) Variables: $A_k(w)$, $k \in [1,F]$, $w \in V^{PN}_S$; $S_N(w)$, $w \in V^{PN}_S$; $S_L(e)$, $e \in E^{PN}$; $y_{w,k}$, $k \in [1,F]$, $w \in V^{PN}_S$
/* Evaluation of the candidate mapping of $G^{SFC}_{k_{tar}} = (V^{SFC}_{k_{tar}}, E^{SFC}_{k_{tar}})$ in $G^{PN} = (V^{PN}, E^{PN})$ */
(3) assign the nodes $v \in V^{SFC}_{k_{tar},A}$ to the physical network nodes $w \in V^{PN}_A$ according to the values of the parameters $\alpha_{v,w}$
(4) select the server nodes $w \in V^{PN}_S$ and the physical network paths $p \in \mathcal{P}$ to be assigned to the VNF nodes $v \in V^{SFC}_{k_{tar},F}$ and to the virtual links $d \in E^{SFC}_{k_{tar}}$ by applying the algorithm proposed in [14], based on the values of the node and link stresses $S_N(w)$ and $S_L(e)$; determine the variable values $z^k_{v,w}$, $k \in [1,F]$, $v \in V^{SFC}_{k_{tar},F}$, $w \in V^{PN}_S$, and $u_{d,p}$, $d \in E^{SFC}_{k_{tar}}$, $p \in \mathcal{P}$
/* Server Resource Availability Check Phase */
(5) for $v \in V^{SFC}_{k_{tar},F}$, $w \in V^{PN}_S$, $k \in [1,F]$ do
(6)   if $\sum_{s=1, s \ne k}^{F} y_{w,s} + \lceil A_k(w) + z^k_{v,w} B^{SFC}(v) t^{proc}_k \rceil \le N^{PN}_{core}(w)$ then
(7)     $A_k(w) = A_k(w) + z^k_{v,w} B^{SFC}(v) t^{proc}_k$
(8)     $S_N(w) = S_N(w) + z^k_{v,w} B^{SFC}(v) t^{proc}_k$
(9)     $y_{w,k} = \lceil A_k(w) \rceil$
(10)  else
(11)    REJECT the $k_{tar}$th SFC request
(12)  end if
(13) end for
/* Link Resource Availability Check Phase */
(14) for $e \in E^{PN}$ do
(15)  if $S_L(e) + \sum_{d \in E^{SFC}_{k_{tar}}} C^{SFC}(d) \sum_{p \in \mathcal{P}} \delta_{e,p} u_{d,p} \le C^{PN}(e)$ then
(16)    $S_L(e) = S_L(e) + \sum_{d \in E^{SFC}_{k_{tar}}} C^{SFC}(d) \sum_{p \in \mathcal{P}} \delta_{e,p} u_{d,p}$
(17)  else
(18)    REJECT the $k_{tar}$th SFC request
(19)  end if
(20) end for
(21) Output: $z^k_{v,w}$, $k \in [1,F]$, $v \in V^{SFC}_{k_{tar},F}$, $w \in V^{PN}_S$; $u_{d,p}$, $d \in E^{SFC}_{k_{tar}}$, $p \in \mathcal{P}$

Algorithm 1: MASRN algorithm.

The candidate physical network links and nodes in which to embed the target SFC are evaluated. First of all, the access nodes are evaluated (line (3)) according to the values of the parameters $\alpha_{v,w}$, $v \in V^{SFC}_{k_{tar},A}$, $w \in V^{PN}_A$. Next the MASRN algorithm selects a cluster of server nodes (line (4)) that are not only lightly loaded but also likely to result in low substrate link stresses when they are connected. Details on the procedure are reported in [14].

In the next phases the resource availability in the selected cluster is checked. The resource availability in the server nodes and physical network links is verified in the Server (lines (5)-(13)) and Link (lines (14)-(20)) Resource Availability Check Phases respectively. If resources are not available in either links or nodes, the target SFC request is rejected. In particular, in the Server Resource Availability Check Phase, the algorithm verifies whether the allocation of new Vcores is needed and, in this case, their availability is checked (line (6)). The variable $y_{w,k}$ is also updated in this phase (line (9)).
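As an illustration, the two check phases of Algorithm 1 can be sketched in Python as follows; the data structures and function names are ours and are not part of the original algorithm specification:

```python
import math

def server_check_and_update(A, y, N_core, w, k, demand):
    """Server Resource Availability Check (lines (5)-(13) of Algorithm 1).
    A[w][k]: core load of the VNF instance of type k on server w;
    y[w][k]: Vcores currently allocated to that instance;
    demand:  B_SFC(v) * t_proc_k for the VNF node being mapped."""
    other_instances = sum(y[w][s] for s in range(len(y[w])) if s != k)
    if other_instances + math.ceil(A[w][k] + demand) <= N_core[w]:
        A[w][k] += demand              # line (7): update the instance load
        y[w][k] = math.ceil(A[w][k])   # line (9): (re)dimension the Vcores
        return True
    return False                       # line (11): reject the SFC request

def link_check_and_update(S_L, C_PN, e, added_load):
    """Link Resource Availability Check (lines (14)-(20) of Algorithm 1)."""
    if S_L[e] + added_load <= C_PN[e]:
        S_L[e] += added_load           # line (16): update the link stress
        return True
    return False                       # line (18): reject the SFC request
```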

Finally, the outputs (line (21)) of the MASRN algorithm are the selected values for the variables $z^k_{v,w}$, $k \in [1,F]$, $v \in V^{SFC}_{k_{tar},F}$, $w \in V^{PN}_S$, and $u_{d,p}$, $d \in E^{SFC}_{k_{tar}}$, $p \in \mathcal{P}$.

The computational complexity of the MASRN algorithm depends on the procedure for the evaluation of the potential nodes [14]. Let $N_s$ be the total number of servers. The complexity of the phase in which the potential nodes are evaluated can be characterized according to the following remarks: (i) as many servers as the number of VNFs of the SFC are determined and (ii) the $h$th server is selected by evaluating the $(N_s - h)$ shortest paths and by performing a minimum operation on a list with $(N_s - h)$ elements. According to these remarks, the computational complexity is given by $O(V(V + N_s) N_l \log(N_s + N_n))$, where $V$ is the maximum number of VNFs in an SFC and $N_l$ and $N_n$ are the numbers of links and nodes of the network respectively.

Figure 2: SFC planner architecture.

4 Online Algorithms for SFC Routing in NFV Network Architectures

In the online approach the SFC requests arrive at the system over time and each of them is characterized by a certain duration. In order to reduce the complexity of the online SFC resource allocation algorithm, we designed an SFC planner whose general architecture is described in Section 4.1. The operation mode of the SFC planner is based on some algorithms that we describe in Section 4.2.

4.1 SFC Planner Architecture. The architecture of the SFC planner is shown in Figure 2. It could make up the main component of an orchestrator in the NFV architecture [6]. It is composed of three components: the SFC Scheduler, the Resource Monitor, and the Consolidation Module.

Once the SFC Scheduler receives an SFC request, it executes an algorithm to verify if resources are available and to decide which resources to use.

The physical resources, as well as the Virtual Machines running on the servers, are monitored by the Resource Monitor. If a failure of any physical/virtual node or link occurs, it notifies the event to the SFC Scheduler.

The Consolidation Module allows for server resource consolidation in low traffic periods. When the consolidation technique is applied, VMs are migrated to as few servers as possible; the remaining ones are turned off and power consumption saving can be achieved.

4.2 SFC Scheduling and Consolidation Algorithms. In this section we describe the algorithms executed by the SFC Scheduler and the Consolidation Module.

Whenever a new SFC request arrives at the SFC planner, the SFC scheduling algorithm is executed in order to find the best embedding in the system while maintaining a uniform usage of the network resources. The main steps of this procedure are reported in the flow chart in Figure 3 and are based on the MASRN heuristic described in Section 3.3. The algorithm starts by selecting the set of servers on which the VNFs requested by the SFC will be executed. In the second step the physical paths on which the virtual links will be mapped are selected. If there are not enough available resources in the first or in the second step, the request is dropped.

Figure 3: Flow chart of the SFC scheduling algorithm.
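In code, the admission logic of Figure 3 reduces to two sequential feasibility checks. In the following sketch the two selection callbacks abstract the potential-factor server choice and the path computation, and are our own placeholders:

```python
def schedule_sfc(sfc, select_servers, select_paths):
    """Admission control of the SFC Scheduler: servers first, then paths."""
    servers = select_servers(sfc)        # step 1: server selection
    if servers is None:
        return False                     # not enough server resources: drop
    paths = select_paths(sfc, servers)   # step 2: map virtual links to paths
    if paths is None:
        return False                     # not enough link bandwidth: drop
    return True                          # accept and embed the SFC request
```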

The flow chart of the consolidation algorithm is reported in Figure 4. Each server $s \in V^{PN}_S$ is characterized by a parameter $\eta_s$, defined as the ratio of the total amount of incoming/outgoing traffic $\Lambda_s$ that it is handling to the power $P_s$ it is consuming, that is, $\eta_s = \Lambda_s / P_s$. The power consumption model assumes a constant power contribution and a variable one that linearly increases with the handled traffic. The server power consumption is expressed by

\[P_s = P_{idle} + (P_{max} - P_{idle}) \frac{L_s}{C_s}, \quad \forall s \in V^{PN}_S \tag{17}\]

where $P_{idle}$ is the power consumed by the server when no traffic is handled, $P_{max}$ is the maximum server power consumption, $L_s$ is the total load handled by the server $s$, and $C_s$ is its maximum capacity. The maximum power that the server can consume is expressed as $P_{max} = P_{idle}/a$, where $a$ is a parameter characterizing how much the power consumption depends on the handled traffic. The power consumption is the more rate adaptive the lower $a$ is. For $a = 1$ the rate adaptive power consumption is absent and the consumed power is constant.
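A small sketch of the power model (17) and of the efficiency parameter $\eta_s$ follows; the half-load example is our own assumption, while $P_{max} = 1000$ W and $C_s = 10$ Gbps are the values used later in Section 5:

```python
def server_power(load_bps, capacity_bps, p_max, a):
    """Equation (17) with P_idle = a * P_max."""
    p_idle = a * p_max
    return p_idle + (p_max - p_idle) * load_bps / capacity_bps

def efficiency(traffic_bps, power_w):
    """eta_s = Lambda_s / P_s: traffic handled per watt consumed."""
    return traffic_bps / power_w

# Example: a half-loaded 10 Gbps server with P_max = 1000 W
for a in (1.0, 0.1):
    p = server_power(5e9, 10e9, 1000.0, a)
    print(a, p, efficiency(5e9, p))
```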

The proposed algorithm tries to switch off the server with minimum power consumption per bit carried. For this reason, when the consolidation algorithm is executed, it starts selecting the server $s_{min}$ with the minimum value of $\eta_s$ as the one to turn off.


Figure 4: Flow chart of the SFC consolidation algorithm.

A set of possible destination servers able to host the VMs executed on $s_{min}$ is then evaluated and sorted in descending order, based again on the value of $\eta_s$. The algorithm proceeds by selecting the first element of the ordered set and evaluating whether there are enough resources in the site to reroute all the flows generated from or terminated at $s_{min}$. If it succeeds, the migration is performed and the server $s_{min}$ is turned off. If it fails, the destination server is removed from the set and the next server of the list is considered. When the ordered set of possible destinations becomes empty, another server to turn off is selected.
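A single pass of this consolidation choice could look like the following sketch, where the can_migrate_all predicate abstracts the feasibility check on site resources and is our own placeholder:

```python
def consolidation_pass(active_servers, eta, can_migrate_all):
    """Try to switch off the active server with minimum eta_s = Lambda_s / P_s."""
    s_min = min(active_servers, key=lambda s: eta[s])
    # Candidate destinations, sorted by descending efficiency
    candidates = sorted((s for s in active_servers if s != s_min),
                        key=lambda s: eta[s], reverse=True)
    for dst in candidates:
        if can_migrate_all(s_min, dst):
            return s_min, dst   # migrate all VMs of s_min, then turn it off
    return None                 # no feasible destination: try another server
```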

The Consolidation Module also manages the resource deconsolidation procedure when the reactivation of physical servers/links is needed. Basically, the deconsolidation algorithm selects the most loaded VMs already migrated to a new server and moves them back to their original servers. The flows handled by these VMs are accordingly rerouted.

5 Numerical Results

We evaluate the effectiveness of the proposed algorithms in the case of the network scenario reported in Figure 5. The network is composed of five core nodes, five edge nodes, and six access nodes in which the SFC requests are randomly generated and terminated. A Network Function Virtualization (NFV) site is connected to edge nodes 3 and 4 through two access routers 1 and 2. The NFV site is composed of two switches and sixteen servers. In the basic configuration, 40 Gbps links are considered, except the links connecting the servers to the switches in the NFV site, whose rate is equal to 10 Gbps. The sixteen servers are equipped with 48 Vcores each.

Each SFC request is composed of three Virtual Network Functions (VNFs), that is, a firewall, a load balancer, and VPN encryption, whose packet processing times are assumed to be equal to 7.08 us, 0.65 us, and 1.64 us [27] respectively. We also assume that the load balancer splits the input traffic towards a number of output links chosen uniformly from 1 to 3.

An example of graph for a generated SFC is reported in Figure 6, in which $u_t$ is an ingress access node and $v^1_t$, $v^2_t$, and $v^3_t$ are egress access nodes. The first and second VNFs are a firewall and VPN encryption; the third VNF is a load balancer splitting the traffic towards three output links. In the considered case study we also assume that the SFCs handle traffic flows of packet length equal to 1500 bytes and required bandwidth uniformly distributed from 500 Mbps to 1 Gbps.

5.1 Offline SFC Scheduling. In the offline case we assume that a given number $T$ of SFC requests are a priori known. For the case study previously described we apply the MASRN heuristic proposed in Section 3.3. We report the percentage of dropped SFCs in Figure 7 as a function of the offered number of SFCs. We report the curve for the basic scenario and the ones in which the link capacities are doubled, quadrupled, and increased by ten times respectively. From Figure 7 we can see how, in order to have a dropped SFC percentage lower than 10%, the number of SFC requests has to be smaller than 60 in the case of the basic scenario. This remarkable blocking is due to the shortage of network capacity. In fact, we can notice from Figure 7 how the dropped SFC percentage significantly decreases when the link bandwidth increases. For instance, when the capacity is quadrupled, the network infrastructure is able to accommodate up to 180 SFCs without any loss.

This is confirmed in Figure 8, where we report the number of Vcores occupied in the servers in the case of $T = 360$. Each of the servers is represented on the $x$-axis with the identification (ID) of Figure 5. We also show in Figure 8 the number of Vcores allocated to each firewall, load balancer, and VPN encryption VNF with the blue, green, and red colors respectively. The basic scenario case and the ones in which the link capacity is doubled, quadrupled, and increased by 10 times are reported in Figures 8(a), 8(b), 8(c), and 8(d) respectively. From Figure 8 we can notice how the MASRN heuristic works correctly by guaranteeing a uniform occupancy of the servers in terms of the total number of Vcores used. We can also notice how the firewall VNF is the one needing the highest number of Vcores allocated. That is a consequence of the higher processing time that the firewall VNF requires with respect to the load balancer and VPN encryption VNFs.

5.2 Online SFC Scheduling. The numerical results are achieved in the following traffic scenario. The traffic pattern we consider is supposed to be sinusoidal, with a peak value during daytime and a minimum value during nighttime. We also assume that SFC requests follow a Poisson process and that the SFC duration time is distributed according to an exponential distribution.


Figure 5: The network topology is composed of six access nodes, five edge nodes, and five core nodes. The NFV site is composed of two routers, two switches, and sixteen servers.

The following parameters define the SFC requests in the Peak Hour Interval (PHI):

(i) $\lambda$: the arrival rate of SFC requests;
(ii) $\mu$: the termination rate of an embedded SFC;
(iii) $[\beta^{min}, \beta^{max}]$: the range in which the bandwidth request for an SFC is randomly selected.

We consider $K$ time intervals in a day, each of them characterized by a scaling factor $\alpha_h$ with $h = 0, \ldots, K-1$. In order to capture the sinusoidal shape of the traffic in a day, $\alpha_h$ is evaluated with the following expression:

\[\alpha_h = \begin{cases} 1 & h = 0 \\ 1 - \dfrac{2h}{K}\left(1 - \alpha_{min}\right) & h = 1, \ldots, \dfrac{K}{2} \\ 1 - \dfrac{2(K-h)}{K}\left(1 - \alpha_{min}\right) & h = \dfrac{K}{2}+1, \ldots, K-1 \end{cases} \tag{18}\]

where $\alpha_0$ and $\alpha_{min}$ refer to the peak traffic and the minimum traffic scenarios respectively. The parameter $\alpha_h$ affects both the SFC arrival rate and the requested bandwidth. In particular, we have the following expressions for the SFC request rate $\lambda_h$ and the minimum and maximum requested bandwidths $\beta^{min}_h$ and $\beta^{max}_h$ in the interval $h$ ($h = 0, \ldots, K-1$):

\[\lambda_h = \alpha_h \lambda, \qquad \beta^{min}_h = \alpha_h \beta^{min}, \qquad \beta^{max}_h = \alpha_h \beta^{max}. \tag{19}\]

Figure 6: An example of SFC composed of one firewall VNF, one VPN encryption VNF, and one load balancer VNF that splits the input traffic towards three output logical links. $u_t$ denotes the ingress access node, while $v^1_t$, $v^2_t$, and $v^3_t$ denote the egress access nodes.

Figure 7: Dropped SFC percentage as a function of the number of SFCs offered. The four curves are reported for the basic scenario case and the ones in which the link capacities are doubled, quadrupled, and increased by 10 times.

Next we evaluate the performance of the SFC planner proposed in Section 4 in the dynamic traffic scenario and for the parameter values $\beta^{min} = 500$ Mbps, $\beta^{max} = 1$ Gbps, $\lambda = 1$ req/min, and $\mu = 1/15$ min$^{-1}$. We also assume $K = 24$, with traffic change and consequently application of the consolidation algorithm every hour.
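The scaling profile (18) is easy to reproduce; the following sketch assumes $K$ even, as in the $K = 24$ case used here, and the value of $\alpha_{min}$ is an arbitrary assumption of ours:

```python
def alpha(h, K=24, alpha_min=0.2):
    """Equation (18): triangular day profile with peak at h = 0.
    alpha_min = 0.2 is an assumed value, not taken from the paper."""
    if h == 0:
        return 1.0
    if h <= K // 2:
        return 1 - (2 * h / K) * (1 - alpha_min)
    return 1 - (2 * (K - h) / K) * (1 - alpha_min)

# lambda_h and the bandwidth range of equation (19) scale linearly with alpha_h
lam, beta_min, beta_max = 1.0, 500e6, 1e9
profile = [(alpha(h) * lam, alpha(h) * beta_min, alpha(h) * beta_max)
           for h in range(24)]
```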

Finally, we consider servers whose power consumption is characterized by the expression (17) with parameters $P_{max} = 1000$ W and $C_s = 10$ Gbps.

We report the SFC blocking probability as a function of the average number of offered SFC requests during the PHI ($\lambda/\mu$) in Figure 9. We show the results in the case of the basic link capacity scenario and in the ones obtained considering a capacity of the links that is twice and four times higher than the basic one. We consider the case of servers with no rate adaptive power consumption ($a = 1$). As we can observe, when the offered traffic is low, the degradation in terms of SFC blocking probability introduced by the consolidation technique increases. In fact, in this low traffic scenario more resource consolidation is possible, with the consequence of higher SFC blocking probabilities. When the offered traffic increases, the blocking probability with or without applying the consolidation technique does not change much, because the network is heavily loaded and only few servers can be turned off. Furthermore, as we can expect, the blocking probability decreases when the link capacity increases. The performance loss due to the application of the consolidation technique is what we have to pay in order to obtain benefits in terms of power saved by turning off servers. The curves in Figure 10 compare the power consumption percentage savings that we can achieve in the cases $a = 0.1$ and $a = 1$. The results show that when the offered traffic decreases and the link capacity increases, higher power consumption saving can be achieved. The reason is that we have more resources on the links and less loaded VMs, which allow for a higher number of VM migrations and more resource consolidation. The other result to notice is that the use of rate adaptive servers reduces the power consumption saving when we perform resource consolidation. This is due to the lower value $P_{idle}$ of the constant power consumption of the server, which leads to lower power consumption saving when a server is switched off.

Figure 8: Number of allocated Vcores in each server for $T = 360$, in the basic scenario and in the scenarios in which the link capacity is doubled, quadrupled, and increased by 10 times (panels (a)-(d)). The numbers of Vcores allocated to the firewall, load balancer, and VPN encryption VNFs are represented with blue, green, and red colors respectively.

Figure 9: SFC blocking probability as a function of the average number of SFCs offered in the PHI, for the cases in which the consolidation technique is applied and in which it is not. The server power consumption is constant ($a = 1$).

Figure 10: The power consumption percentage saving achieved with the application of the consolidation technique versus the average number of SFCs offered in the PHI. The cases $a = 1$ and $a = 0.1$ are considered.

6 Conclusions

The aim of this paper has been to propose heuristics for the resource dimensioning and the routing of Service Function Chains in network architectures employing the Network Function Virtualization technology. We have introduced an optimization problem whose objective is the maximization of the number of accepted SFC requests under the compliance of the server processing and link bandwidth capacities. With the problem being NP-hard, we have proposed the Maximizing the Accepted SFC Requests Number (MASRN) heuristic, which is based on the uniform occupancy of the server and link resources. The heuristic is also able to dimension the Vcores of the servers by evaluating the number of Vcores to be allocated to each VNF instance.

An SFC planner has also been proposed for the dynamic traffic scenario. The planner is based on a consolidation algorithm that allows for the switching off of servers during the low traffic periods. A case study has been analyzed in which we have shown that the heuristic works correctly by uniformly allocating the server resources and reserving a higher number of Vcores for the VNFs characterized by higher packet processing times. Furthermore, the proposed SFC planner allows for a power consumption saving dependent on the offered traffic and varying from 10% to 70%. Such a saving is to be paid with an increase in SFC blocking probability. As future research we will propose and investigate Virtual Machine migration policies allowing for the achievement of the right trade-off between power consumption saving and SFC blocking probability degradation.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

The work described in this paper has been funded by Telecom Italia within the research agreement titled "Definition and Evaluation of Protocols and Architectures of a Network Automation Platform for TI-NVF Infrastructure".

References

[1] R. Mijumbi, J. Serrat, J. Gorricho, N. Bouten, F. De Turck, and R. Boutaba, "Network function virtualization: state-of-the-art and research challenges," IEEE Communications Surveys & Tutorials, vol. 18, no. 1, pp. 236–262, 2016.

[2] P. Veitch, M. J. McGrath, and V. Bayon, "An instrumentation and analytics framework for optimal and robust NFV deployment," IEEE Communications Magazine, vol. 53, no. 2, pp. 126–133, 2015.

[3] R. Yu, G. Xue, V. T. Kilari, and X. Zhang, "Network function virtualization in the multi-tenant cloud," IEEE Network, vol. 29, no. 3, pp. 42–47, 2015.

[4] Z. Bronstein, E. Roch, J. Xia, and A. Molkho, "Uniform handling and abstraction of NFV hardware accelerators," IEEE Network, vol. 29, no. 3, pp. 22–29, 2015.

[5] B. Han, V. Gopalakrishnan, L. Ji, and S. Lee, "Network function virtualization: challenges and opportunities for innovations," IEEE Communications Magazine, vol. 53, no. 2, pp. 90–97, 2015.

[6] ETSI Industry Specification Group (ISG) NFV, "ETSI Group Specifications on Network Function Virtualization, 1st Phase Documents," January 2015, http://docbox.etsi.org/ISG/NFV/Open.

[7] H. Hawilo, A. Shami, M. Mirahmadi, and R. Asal, "NFV: state of the art, challenges and implementation in next generation mobile networks (vEPC)," IEEE Network, vol. 28, no. 6, pp. 18–26, 2014.

[8] The Internet Engineering Task Force (IETF) Service Function Chaining (SFC) Working Group (WG), 2015, https://datatracker.ietf.org/wg/sfc/charter.

[9] The Internet Engineering Task Force (IETF) Service Function Chaining (SFC) Working Group (WG) Documents, 2015, https://datatracker.ietf.org/wg/sfc/documents.

[10] M. Yoshida, W. Shen, T. Kawabata, K. Minato, and W. Imajuku, "MORSA: a multi-objective resource scheduling algorithm for NFV infrastructure," in Proceedings of the 16th Asia-Pacific Network Operations and Management Symposium (APNOMS '14), pp. 1–6, Hsinchu, Taiwan, September 2014.

[11] H. Moens and F. D. Turck, "VNF-P: a model for efficient placement of virtualized network functions," in Proceedings of the 10th International Conference on Network and Service Management (CNSM '14), pp. 418–423, November 2014.

[12] R. Mijumbi, J. Serrat, J. L. Gorricho, N. Bouten, F. De Turck, and S. Davy, "Design and evaluation of algorithms for mapping and scheduling of virtual network functions," in Proceedings of the 1st IEEE Conference on Network Softwarization (NetSoft '15), pp. 1–9, University College London, London, UK, April 2015.

[13] S. Clayman, E. Maini, A. Galis, A. Manzalini, and N. Mazzocca, "The dynamic placement of virtual network functions," in Proceedings of the IEEE Network Operations and Management Symposium (NOMS '14), pp. 1–9, Krakow, Poland, May 2014.

[14] Y. Zhu and M. H. Ammar, "Algorithms for assigning substrate network resources to virtual network components," in Proceedings of the 25th IEEE International Conference on Computer Communications (INFOCOM '06), pp. 1–12, IEEE, Barcelona, Spain, April 2006.

[15] G. Lee, M. Kim, S. Choo, S. Pack, and Y. Kim, "Optimal flow distribution in service function chaining," in Proceedings of the 10th International Conference on Future Internet (CFI '15), pp. 17–20, ACM, Seoul, Republic of Korea, June 2015.

[16] A. Fischer, J. F. Botero, M. T. Beck, H. De Meer, and X. Hesselbach, "Virtual network embedding: a survey," IEEE Communications Surveys and Tutorials, vol. 15, no. 4, pp. 1888–1906, 2013.

[17] M. Rabbani, R. Pereira Esteves, M. Podlesny, G. Simon, L. Zambenedetti Granville, and R. Boutaba, "On tackling virtual data center embedding problem," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 177–184, Ghent, Belgium, May 2013.

[18] A. Basta, W. Kellerer, M. Hoffmann, H. J. Morper, and K. Hoffmann, "Applying NFV and SDN to LTE mobile core gateways, the functions placement problem," in Proceedings of the 4th Workshop on All Things Cellular: Operations, Applications and Challenges (AllThingsCellular '14), pp. 33–38, ACM, Chicago, Ill, USA, August 2014.

[19] M. Bagaa, T. Taleb, and A. Ksentini, "Service-aware network function placement for efficient traffic handling in carrier cloud," in Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC '14), pp. 2402–2407, IEEE, Istanbul, Turkey, April 2014.

[20] S. Mehraghdam, M. Keller, and H. Karl, "Specifying and placing chains of virtual network functions," in Proceedings of the IEEE 3rd International Conference on Cloud Networking (CloudNet '14), pp. 7–13, October 2014.

[21] M. Bouet, J. Leguay, and V. Conan, "Cost-based placement of vDPI functions in NFV infrastructures," in Proceedings of the 1st IEEE Conference on Network Softwarization (NetSoft '15), pp. 1–9, London, UK, April 2015.

[22] M. Xia, M. Shirazipour, Y. Zhang, H. Green, and A. Takacs, "Network function placement for NFV chaining in packet/optical datacenters," Journal of Lightwave Technology, vol. 33, no. 8, Article ID 7005460, pp. 1565–1570, 2015.

[23] O. Hyeonseok, Y. Daeun, C. Yoon-Ho, and K. Namgi, "Design of an efficient method for identifying virtual machines compatible with service chain in a virtual network environment," International Journal of Multimedia and Ubiquitous Engineering, vol. 11, no. 9, Article ID 197208, 2014.

[24] W. Cerroni and F. Callegati, "Live migration of virtual network functions in cloud-based edge networks," in Proceedings of the IEEE International Conference on Communications (ICC '14), pp. 2963–2968, IEEE, Sydney, Australia, June 2014.

[25] P. Quinn and J. Guichard, "Service function chaining: creating a service plane via network service headers," Computer, vol. 47, no. 11, pp. 38–44, 2014.

[26] M. F. Zhani, Q. Zhang, G. Simona, and R. Boutaba, "VDC Planner: dynamic migration-aware Virtual Data Center embedding for clouds," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 18–25, May 2013.

[27] M. Dobrescu, K. Argyraki, and S. Ratnasamy, "Toward predictable performance in software packet-processing platforms," in Proceedings of the 9th USENIX Symposium on Networked Systems Design and Implementation, pp. 1–14, April 2012.

Research Article
A Game for Energy-Aware Allocation of Virtualized Network Functions

Roberto Bruschi,1 Alessandro Carrega,1,2 and Franco Davoli1,2

1CNIT-University of Genoa Research Unit, 16145 Genoa, Italy
2DITEN, University of Genoa, 16145 Genoa, Italy

Correspondence should be addressed to Alessandro Carrega; alessandro.carrega@unige.it

Received 2 October 2015; Revised 22 December 2015; Accepted 11 January 2016

Academic Editor: Xavier Hesselbach

Copyright © 2016 Roberto Bruschi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Network Functions Virtualization (NFV) is a network architecture concept where network functionality is virtualized and separated into multiple building blocks that may connect or be chained together to implement the required services. The main advantages consist of an increase in network flexibility and scalability. Indeed, each part of the service chain can be allocated and reallocated at runtime depending on demand. In this paper we present and evaluate an energy-aware Game-Theory-based solution for resource allocation of Virtualized Network Functions (VNFs) within NFV environments. We consider each VNF as a player of the problem that competes for the physical network node capacity pool, seeking the minimization of individual cost functions. The physical network nodes dynamically adjust their processing capacity according to the incoming workload, by means of an Adaptive Rate (AR) strategy that aims at minimizing the product of energy consumption and processing delay. On the basis of the result of the nodes' AR strategy, the VNFs' resource sharing costs assume a polynomial form in the workflows, which admits a unique Nash Equilibrium (NE). We examine the effect of different (unconstrained and constrained) forms of the nodes' optimization problem on the equilibrium and compare the power consumption and delay achieved with energy-aware and non-energy-aware strategy profiles.

1 Introduction

In the last few years power consumption has shown a growing and alarming trend in all industrial sectors, particularly in Information and Communication Technology (ICT). Public organizations, Internet Service Providers (ISPs), and telecom operators started reporting alarming statistics of network energy requirements and of the related carbon footprint since the first decade of the 2000s [1]. The Global e-Sustainability Initiative (GeSI) estimated a growth of ICT greenhouse gas emissions (in GtCO2e, Gt of CO2 equivalent gases) to 2.3% of global emissions (from 1.3% in 2002) in 2020 if no Green Network Technologies (GNTs) would be adopted [2]. On the other hand, the abatement potential of ICT in other industrial sectors is seven times the size of the ICT sector's own carbon footprint.

Only recently, due to the rise in energy prices, the continuous growth of the customer population, the increase in broadband access demand, and the expanding number of services being offered by telecoms and providers, has energy efficiency become a high-priority objective also for wired networks and service infrastructures (after having started to be addressed for datacenters and wireless networks).

The increasing network energy consumption essentially depends on new services offered, which follow Moore's law by doubling every two years, and on the need to sustain an ever-growing population of users and user devices. In order to support new generation network infrastructures and related services, telecoms and ISPs need a larger equipment base with sophisticated architecture able to perform more and more complex operations in a scalable way. Notwithstanding these efforts, it is well known that most networks and networking equipment are currently still provisioned for busy or rush hour load, which typically exceeds their average utilization by a wide margin. While this margin is generally reached in rare and short time periods, the overall power consumption in today's networks remains more or less constant with respect to different traffic utilization levels.



The growing trend toward implementing networking functionalities by means of software [3] on general-purpose machines and of making more aggressive use of virtualization, as represented by the paradigms of Software Defined Networking (SDN) [4] and Network Functions Virtualization (NFV) [5], would also not be sufficient in itself to reduce power consumption, unless accompanied by "green" optimization and consolidation strategies acting as energy-aware traffic engineering policies at the network level [6]. At the same time, processing devices inside the network need to be capable of adapting their performance to the changing traffic conditions by trading off power and Quality of Service (QoS) requirements. Among the various techniques that can be adopted to this purpose to implement Control Policies (CPs) in network processing devices, Dynamic Adaptation ones consist of adapting the processing rate (AR) or of exploiting low power consumption states in idle periods (LPI) [7].

In this paper we introduce a Game-Theory-based solution for energy-aware allocation of Virtualized Network Functions (VNFs) within NFV environments. In more detail, in NFV networks a collection of service chains must be allocated on physical network nodes. A service chain is a set of one or more VNFs grouped together to provide specific service functionality and can be represented by an oriented graph, where each node corresponds to a particular VNF and each edge describes the operational flow exchanged between a pair of VNFs.

A service request can be allocated on dedicated hardware or by using resources deployed by the Service Provider (SP) that processes the request through virtualized instances. Because of this, two types of service deployments are possible in an NFV network: (i) on physical nodes and (ii) on virtualized instances.

In this paper we focus on the second type of service deployment. We refer to this paradigm as pure NFV. As already outlined above, the SP processes the service request by means of VNFs. In particular, we developed an energy-aware solution to the problem of VNFs' allocation on physical network nodes. This solution is based on the concept of Game Theory (GT). GT is used to model interactions among self-interested players and predict their choice of strategies to optimize cost or utility functions until a Nash Equilibrium (NE) is reached, where no player can further increase its corresponding utility through individual action (see e.g. [8] for specific applications in networking).

More specifically, we consider a bank of physical network nodes (in this paper we also use the terms node and resource to refer to the physical network node) performing tasks on requests submitted by the players' population. Hence, in this game the role of the players is represented by VNFs that compete for the processing capacity pool, each by seeking the minimization of an individual cost function. The nodes can dynamically adjust their processing capacity according to the incoming workload (the processing power required by incoming VNF requests) by means of an AR strategy that aims at minimizing the product of energy consumption and processing delay. On the basis of the result of the nodes' AR strategy, the VNFs' resource sharing costs assume a polynomial form in the workloads, which admits a unique NE.

We examine the effect of different (unconstrained and constrained) forms of the nodes' optimization problem on the equilibrium and compare the power consumption and delay achieved with energy-aware and non-energy-aware strategy profiles.

The rest of the paper is organized as follows. Section 2 discusses the related work. Section 3 briefly summarizes the power model and AR optimization that was introduced in [9]. Although the scenario in [9] is different, it is reasonable to assume that similar considerations are valid here too. On the basis of this result, we derive a number of competitive strategies for the VNFs' allocation in Section 4. Section 5 presents various numerical evaluations, and Section 6 contains the conclusions.

2 Related Work

We now discuss the most relevant works that deal with the resource allocation of VNFs within NFV environments. We decided not only to describe solutions based on GT, but also to provide an overview of the current state of the art in this area. As introduced in the previous section, the key enabling paradigms that will considerably affect the dynamics of ICT networks are SDN and NFV, which are discussed in recent surveys [10-12]. Indeed, SPs and Network Operators (NOs) are facing increasing problems to design and implement novel network functionalities, following rapid changes that characterize the current ISPs and telecom operators (TOs) [13].

Virtualization represents an efficient and cost-effective strategy to exploit and share physical network resources. In this context, the Network Embedding Problem (NEP) has been considered in several recent works [14-19]. In particular, the Virtual Network Embedding Problem (VNEP) consists of finding the mapping between a set of requests for virtual network resources and the available underlying physical infrastructure (the so-called substrate), ensuring that some given performance requirements (on nodes and links) are guaranteed. Typical node requirements are computational resources (i.e. CPU) or storage space, whereas links have a limited bandwidth and introduce a delay. It has been shown that this problem is NP-hard (it includes as subproblem the multiway separator problem). For this reason, heuristic approaches have been devised [20].

The consolidation of virtual resources is considered in [18] by taking into account energy efficiency. The problem is formulated as a mixed integer linear programming (MILP) model, to understand the potential benefits that can be achieved by packing many different virtual tasks on the same physical infrastructure. The observed energy saving is up to 30% for nodes and up to 25% for link energy consumption. Reference [21] presents a solution for the resilient deployment of network functions, using OpenStack for the design and implementation of the proposed service orchestrator mechanism.

An allocation mechanism based on auction theory is proposed in [22]. In particular, the scheme selects the most remunerative virtual network requests, while satisfying QoS requirements and physical network constraints. The system is split into two network substrates modeling physical and virtual resources, with the final goal of finding the best mapping of virtual nodes and links onto physical ones, according to the QoS requirements (i.e. bandwidth, delay, and CPU bounds).

A novel network architecture is proposed in [23] to provide efficient coordinated control of both internal network function state and network forwarding state, in order to help operators achieve the following goals: (i) offering and satisfying tight service level agreements (SLAs); (ii) accurately monitoring and manipulating network traffic; and (iii) minimizing operating expenses.

Various engineering problems, where the action of one component has some impacts on the other components, have been modeled by GT. Indeed, GT, which has been applied at the beginning in economics and related domains, is gaining much interest today as a powerful tool to analyze and design communication networks [24]. Therefore, the problems can be formulated in the GT framework and a stable solution for the components is obtained using the concept of equilibrium [25]. In this regard, GT has been used extensively to develop understanding of stable operating points for autonomous networks. The nodes are considered as the players. Payoff functions are often defined according to achieved connection bandwidth or similar technical metrics.

To construct algorithms with provable convergence to equilibrium points, many approaches consider network models that can be mapped to specially constructed games. Among this type of games, potential games use a real-valued function that represents the entire player set to optimize some performance metric [8, 26]. We mention Altman et al. [27], who provide an extensive survey on networking games. The models and papers discussed in this reference mostly deal with noncooperative GT, the only exception being a short section focused on bargaining games. Finally, Seddiki et al. [25] presented an approach based on two-stage noncooperative games for bandwidth allocation, which aims at reducing the complexity of network management and avoiding bandwidth performance problems in a virtualized network environment. The first stage of the game is the bandwidth negotiation, where the SP requests bandwidth from multiple Infrastructure Providers (InPs). Each InP decides whether to accept or deny the request when the SP would cause link congestion. The second stage is the bandwidth provisioning game, where different SPs compete for the bandwidth capacity of a physical link managed by a single InP.

3 Modeling Power Management Techniques

Dynamic Adaptation techniques in physical network nodes reduce the energy usage by exploiting the fact that systems do not need to run at peak performance all the time.

Rate adaptation is obtained by tuning the clock frequency and/or voltage of processors (DVFS, Dynamic Voltage and Frequency Scaling) or by throttling the CPU clock (i.e. the clock signal is disabled for some number of cycles at regular intervals). Decreasing the operating frequency and the voltage of the node, or throttling its clock, obviously allows the reduction of power consumption and of heat dissipation, at the price of slower performance.

Advantage can be taken of these adaptation capabilities to devise control techniques based on feedback on the incoming load. One of the first proposals is that examined in a seminal paper by Nedevschi et al. [28]. Among other possibilities, we consider here the simple optimization strategy developed in [9], aimed at minimizing the product of power consumption and processing delay with respect to the processing load. The application of this control technique gives rise to a quadratic dependence of the power-delay product on the load, which will be exploited in our subsequent game-theoretic resource allocation problem. Unlike the cited works, our solution considers as load the request rate of services that the SP must process. However, apart from this difference, the general considerations described are valid here too.

Specifically, the dynamic frequency-dependent part of the processor power consumption is proportional to the clock frequency $\nu$ and to the square of the supply voltage $V$ [29]. However, if DVFS is used, there is proportionality between the frequency and the voltage raised to some power $\gamma$, $0 < \gamma \le 1$ [30]. Moreover, the power scaling effect induced by alternating active-idle periods in the queueing system served by the physical network node can be accounted for by multiplying the dynamic power consumption by an increasing concave function of the resource utilization, of the form $(f/\mu)^{1/\alpha}$, where $\alpha \ge 1$ is a parameter and $f$ and $\mu$ are the workload (processing requests per unit time) and the task processing speed, respectively. By taking $\gamma = 1$ (which implies a cubic relationship in the operating frequency), the overall dependence of the power consumption $\Phi$ on the processing speed (which is proportional to the operating frequency) can be expressed as [9]

$$\Phi(\mu) = K \mu^3 \left( \frac{f}{\mu} \right)^{1/\alpha}, \qquad (1)$$

where $K > 0$ is a proportionality constant. This expression will be exploited in the optimization problems to be defined and studied in the next section.
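To fix ideas, the following small sketch (Python; parameter names are ours, not from [9]) evaluates (1) and shows the resulting growth of power with the processing speed at fixed load.

def power(mu, f, K=1.0, alpha=2.0):
    # Dynamic power model (1): cubic in the speed mu, scaled by the
    # active-idle factor (f/mu)**(1/alpha); assumes 0 < f < mu
    return K * mu ** 3 * (f / mu) ** (1.0 / alpha)

# At fixed load, power grows as mu**(3 - 1/alpha): with alpha = 2,
# doubling the speed multiplies the power by 2**2.5, about 5.66
print(power(2.0, 0.5) / power(1.0, 0.5))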

4. System Model of Coordinated Network Management Optimization and Players' Game

We consider the situation shown in Figure 1. There are $S$ VNFs, represented by the incoming workload rates $\lambda_1, \ldots, \lambda_S$, competing for $N$ physical network nodes. The workload to the $j$th node is given by

$$f_j = \sum_{i=1}^{S} f_j^i, \qquad (2)$$

where $f_j^i$ is VNF $i$'s contribution. The total workload vector of player $i$ is $\mathbf{f}^i = \mathrm{col}[f_1^i, \ldots, f_N^i]$ and, obviously,

$$\sum_{j=1}^{N} f_j^i = \lambda_i. \qquad (3)$$

Figure 1: System model for VNFs resource allocation ($S$ VNFs with rates $\lambda_1, \ldots, \lambda_S$; a Resource Allocator distributes the workloads $f_1, \ldots, f_N$ to the $N$ physical nodes).

The computational resources have processing rates $\mu_j$, $j = 1, \ldots, N$, and we associate a cost function with each one, namely,

$$c_j(\mu_j) = \frac{K_j (f_j)^{1/\alpha_j} (\mu_j)^{3 - (1/\alpha_j)}}{\mu_j - f_j}. \qquad (4)$$

Cost functions of the form shown in (4) have been introduced in [9] and represent the product of power and processing delay under the M/M/1 assumption on each node's queue. They actually correspond to the average energy consumption that can be attributed to the functions' handling (considering that the node will be active whenever there are queued functions). Despite the possible inaccuracy in the representation of the queueing behavior, the denominator of (4) reflects the penalty paid for approaching the resource capacity, which is a major aspect in a model to be used for control purposes.

We now consider two specific control problems, whose resulting resource allocations are interconnected. Specifically, we assume the presence of Control Policies (CPs) acting in the network nodes, whose aim is to dynamically adjust the processing rate in order to minimize cost functions of the form of (4). On the other hand, the players representing the VNFs, knowing the policy adopted by the CPs and the resulting form of their optimal costs, implement their own strategies to distribute their requests among the different resources.

The simple control structure that we envisage actually represents a situation that may be of interest in a number of different operational settings. We only mention two different cases of current high relevance: (i) a multitenant environment, where a number of ISPs are offered services by a datacenter operator (or by a telecom operator in the access network) on a number of virtual machines; and (ii) a collection of virtual environments created by a SP on behalf of its customers, where services are activated to represent customers' functionalities on their own private virtual LANs. It is worth noting that, in both cases, the nodes' customers may be unwilling to disclose their loads to one another, which justifies the decentralized game optimization.

4.1. Nodes' Control Policies. We consider two possible variants.

4.1.1. CP1 (Unconstrained). The direct minimization of (4) with respect to $\mu_j$ yields immediately

$$\mu_j^*(f_j) = \frac{3\alpha_j - 1}{2\alpha_j - 1}\, f_j = \theta_j f_j,$$

$$c_j^*(f_j) = K_j\, \frac{(\theta_j)^{3 - (1/\alpha_j)}}{\theta_j - 1}\, (f_j)^2 = h_j \cdot (f_j)^2. \qquad (5)$$

We must note that we are considering a continuous solution to the node capacity adjustment. In practice, the physical resources allow a discrete set of working frequencies, with corresponding processing capacities. This would also ensure that the processing speed does not decrease below a lower threshold, avoiding excessive delay in the case of low load. The unconstrained problem is an idealized variant that would make sense only when the load on the node is not too small.
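For illustration, the CP1 quantities of (5) can be computed directly; the sketch below (Python, our naming) returns the optimal speed $\mu_j^*$, the coefficient $h_j$, and the optimal cost for a given load.

def cp1(f, K, alpha):
    # Unconstrained CP1 policy (5) for a node with total load f
    theta = (3.0 * alpha - 1.0) / (2.0 * alpha - 1.0)     # theta_j > 1
    mu_opt = theta * f                                    # optimal speed mu_j^*
    h = K * theta ** (3.0 - 1.0 / alpha) / (theta - 1.0)  # coefficient h_j
    return mu_opt, h, h * f ** 2                          # ..., optimal cost

mu_opt, h, cost = cp1(f=0.8, K=5.0, alpha=2.5)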

4.1.2. CP2 (Individual Constraints). Each node has a maximum and a minimum processing capacity, which are taken explicitly into account ($\mu_j^{\min} \le \mu_j \le \mu_j^{\max}$, $j = 1, \ldots, N$). Then

$$\mu_j^*(f_j) = \begin{cases} \theta_j f_j, & \underline{f}_j = \dfrac{\mu_j^{\min}}{\theta_j} \le f_j \le \dfrac{\mu_j^{\max}}{\theta_j} = \overline{f}_j, \\ \mu_j^{\min}, & 0 < f_j < \underline{f}_j, \\ \mu_j^{\max}, & \mu_j^{\max} > f_j > \overline{f}_j. \end{cases} \qquad (6)$$

Therefore,

$$c_j^*(f_j) = \begin{cases} h_j \cdot (f_j)^2, & \underline{f}_j \le f_j \le \overline{f}_j, \\ \dfrac{K_j (f_j)^{1/\alpha_j} (\mu_j^{\max})^{3 - (1/\alpha_j)}}{\mu_j^{\max} - f_j}, & \mu_j^{\max} > f_j > \overline{f}_j, \\ \dfrac{K_j (f_j)^{1/\alpha_j} (\mu_j^{\min})^{3 - (1/\alpha_j)}}{\mu_j^{\min} - f_j}, & 0 < f_j < \underline{f}_j. \end{cases} \qquad (7)$$

In this case, we assume explicitly that $\sum_{i=1}^{S} \lambda_i < \sum_{j=1}^{N} \mu_j^{\max}$.
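A compact way to read (7) is as a three-branch function of the load; the following sketch (Python, our naming, assuming $0 < f_j < \mu_j^{\max}$ as stated above) makes the branching explicit.

def cp2_cost(f, K, alpha, mu_min, mu_max):
    # Optimal node cost (7) under CP2; assumes 0 < f < mu_max
    # (cf. the aggregate-demand assumption stated above)
    theta = (3.0 * alpha - 1.0) / (2.0 * alpha - 1.0)
    f_lo, f_hi = mu_min / theta, mu_max / theta
    if f_lo <= f <= f_hi:                 # interior: quadratic branch of (5)
        h = K * theta ** (3.0 - 1.0 / alpha) / (theta - 1.0)
        return h * f ** 2
    mu = mu_max if f > f_hi else mu_min   # the speed saturates at a bound
    return K * f ** (1.0 / alpha) * mu ** (3.0 - 1.0 / alpha) / (mu - f)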

4.2. Noncooperative Players' Optimization. Given the optimal CPs, which can be found in functional form, we can then state the following.

4.2.1. Players' Problem. Each VNF $i$ wants to minimize, with respect to its workload vector $\mathbf{f}^i = \mathrm{col}[f_1^i, \ldots, f_N^i]$, a weighted (over its workload distribution) sum of the resources' costs, given the flow distribution of the others:

$$\mathbf{f}^{i*} = \arg\min_{\substack{f_1^i, \ldots, f_N^i \\ \sum_{j=1}^{N} f_j^i = \lambda_i}} J^i = \arg\min_{\substack{f_1^i, \ldots, f_N^i \\ \sum_{j=1}^{N} f_j^i = \lambda_i}} \frac{1}{\lambda_i} \sum_{j=1}^{N} f_j^i\, c_j^*(f_j), \quad i = 1, \ldots, S. \qquad (8)$$

In this formulation, the players' problem is a noncooperative game, of which we can seek a NE [31].

We examine the case of the application of CP1 first. Then, the cost of VNF $i$ is given by

$$J^i = \frac{1}{\lambda_i} \sum_{j=1}^{N} f_j^i h_j \cdot (f_j)^2 = \frac{1}{\lambda_i} \sum_{j=1}^{N} f_j^i h_j \cdot (f_j^i + f_j^{-i})^2, \qquad (9)$$

where $f_j^{-i} = \sum_{k \neq i} f_j^k$ is the total flow from the other VNFs to node $j$. The cost in (9) is convex in $f_j^i$ and belongs to a category of cost functions previously investigated in the networking literature without specific reference to energy efficiency [32, 33]. In particular, it is in the family of functions considered in [33], for which the NE Point (NEP) existence and uniqueness conditions of Rosen [34] have been shown to hold. Therefore, our players' problem admits a unique NEP. The latter can be found by considering the Karush-Kuhn-Tucker stationarity conditions with Lagrange multiplier $\xi^i$:

$$\xi^i = \frac{\partial J^i}{\partial f_k^i}, \quad f_k^i > 0; \qquad \xi^i \le \frac{\partial J^i}{\partial f_k^i}, \quad f_k^i = 0; \qquad k = 1, \ldots, N, \qquad (10)$$

from which, for $f_k^i > 0$,

$$\lambda_i \xi^i = h_k (f_k^i + f_k^{-i})^2 + 2 h_k f_k^i (f_k^i + f_k^{-i}),$$

$$f_k^i = -\frac{2}{3} f_k^{-i} \pm \frac{1}{3 h_k} \sqrt{(h_k f_k^{-i})^2 + 3 h_k \lambda_i \xi^i}. \qquad (11)$$

Excluding the negative solution, the nonzero components are those with the smallest and equal partial derivatives, which in a subset $\mathcal{M} \subseteq \{1, \ldots, N\}$ yield $\sum_{j \in \mathcal{M}} f_j^i = \lambda_i$; that is,

$$\sum_{j \in \mathcal{M}} \left[ -\frac{2}{3} f_j^{-i} + \frac{1}{3 h_j} \sqrt{(h_j f_j^{-i})^2 + 3 h_j \lambda_i \xi^i} \right] = \lambda_i, \qquad (12)$$

from which $\xi^i$ can be determined.

As regards form (7) of the optimal node cost, we can note that, if we are restricted to $\underline{f}_j \le f_j \le \overline{f}_j$, $j = 1, \ldots, N$ (a coupled convex constraint set), the conditions of Rosen still hold. If we allow the larger constraint set $0 < f_j < \mu_j^{\max}$, $j = 1, \ldots, N$, the nodes' optimal cost functions are no longer of polynomial type. However, the composite function is still continuously differentiable (see the Appendix). Then it would (i) satisfy the conditions for a convex game (the overall function composed of the three parts is convex), which guarantee the existence of a NEP [34, Th. 1], and (ii) possess equivalent properties to functions of Type A as defined in [32], which lead to uniqueness of the NEP.

5. Numerical Results

In order to evaluate our criterion, we present numerical results deriving from the application of the simplest Control Policy (CP1), which implies the solution of a completely quadratic problem (corresponding to costs as in (9) for the NEP). We compare the resulting allocations, power consumption, and average delays with those stemming from the application (on non-energy-aware nodes) of the algorithm for the minimization of the pure delay function that was developed in [35, Proposition 1]. This algorithm is considered a valid reference point in order to provide a good validation of our criterion. The corresponding cost of VNF $i$ is

$$J_T^i = \sum_{j=1}^{N} f_j^i\, T_j(f_j), \quad i = 1, \ldots, S, \qquad (13)$$

where

$$T_j(f_j) = \begin{cases} \dfrac{1}{\mu_j^{\max} - f_j}, & f_j < \mu_j^{\max}, \\ \infty, & f_j \ge \mu_j^{\max}. \end{cases} \qquad (14)$$

The total demand is assumed to be less than the total operating capacity, as we have done with our CP2 in (7).

To make the comparison fair, we have to make sure that the maximum operating capacities $\mu_j^{\max}$, $j = 1, \ldots, N$ (which are constant in (14)), are compatible with the quadratic behavior stemming from the application of CP1. To this aim, let ${}^{(LCP1)}\!f_j$ denote the total input workload to node $j$ corresponding to the NEP obtained by our problem; we then choose

$$\mu_j^{\max} = \theta_j\, {}^{(LCP1)}\!f_j, \quad j = 1, \ldots, N. \qquad (15)$$

We indicate with ${}^{(T)}\!f_j$ and ${}^{(T)}\!f_j^i$, $j = 1, \ldots, N$, $i = 1, \ldots, S$, the total node input workloads and those corresponding to VNF $i$, respectively, produced by the NEP deriving from the minimization of the costs in (13). The average delays for the flow of player $i$ under the two strategy profiles are obtained as

$$D_T^i = \frac{1}{\lambda_i} \sum_{j=1}^{N} \frac{{}^{(T)}\!f_j^i}{\mu_j^{\max} - {}^{(T)}\!f_j} = \frac{1}{\lambda_i} \sum_{j=1}^{N} \frac{{}^{(T)}\!f_j^i}{\theta_j\, {}^{(LCP1)}\!f_j - {}^{(T)}\!f_j},$$

$$D_{LCP1}^i = \frac{1}{\lambda_i} \sum_{j=1}^{N} \frac{{}^{(LCP1)}\!f_j^i}{\theta_j\, {}^{(LCP1)}\!f_j - {}^{(LCP1)}\!f_j} = \frac{1}{\lambda_i} \sum_{j=1}^{N} \frac{{}^{(LCP1)}\!f_j^i}{(\theta_j - 1)\, {}^{(LCP1)}\!f_j}, \qquad (16)$$

and the power consumption values are given by

$${}^{(T)}\!P_j = K_j (\mu_j^{\max})^3 = K_j (\theta_j\, {}^{(LCP1)}\!f_j)^3,$$

$${}^{(LCP1)}\!P_j = K_j (\theta_j\, {}^{(LCP1)}\!f_j)^3 \left( \frac{{}^{(LCP1)}\!f_j}{\theta_j\, {}^{(LCP1)}\!f_j} \right)^{1/\alpha_j} = K_j (\theta_j\, {}^{(LCP1)}\!f_j)^3\, \theta_j^{-1/\alpha_j}. \qquad (17)$$

Our data for the comparison are organized as follows. We consider normalized input rates, so that $\lambda_i \le 1$, $i = 1, \ldots, S$. We generate randomly $S$ values corresponding to the VNFs' input rates from independent uniform distributions; then

(1) we find the NEP of our quadratic game by choosing the parameters $K_j$, $\alpha_j$, $j = 1, \ldots, N$, also from independent uniform distributions (with $1 \le K_j \le 10$ and $2 \le \alpha_j \le 3$, resp.);

(2) by using the workload profiles obtained, we compute the values $\mu_j^{\max}$, $j = 1, \ldots, N$, from (15) and find the NEP corresponding to the costs in (13) and (14) by using the same input rates;

(3) we compute the corresponding average delays and power consumption values for the two cases by using (16) and (17), respectively.

Steps (1)-(3) are repeated with a new configuration of random values of the input rates for a certain number of times; then, for both games, we average the per-VNF delays and power consumption, as well as the total delay and power consumption averaged over the VNFs, over all trials. We compare the results obtained in this way for four different settings of VNFs and nodes, namely, $S = N = 3$; $S = 3$, $N = 5$; $S = 5$, $N = 3$; and $S = 5$, $N = 5$. The rationale of repeating the experiments with random configurations is to explore a number of different possible settings and collect the results in a synthetic form.

For each VNFs-nodes setting, 30 random input configurations are generated to produce the final average values. In Figures 2-10, the labels EE and NEE refer to the energy-efficient case (quadratic cost) and to the non-energy-efficient one (see (13) and (14)), respectively. Instead, the label "Saving (%)" refers to the saving (in percentage) of the EE case compared to the NEE one; when it is negative, the EE case presents higher values than the NEE one. Finally, the label "User" refers to the VNF.

Figure 2: Power consumption with $S = N = 3$.

Figure 3: User (VNF) delays with $S = N = 3$.

The results are reported in Figures 2-10. The model implementation and the relative results have been developed with MATLAB® [36].

In general, the energy-efficient optimization tends to save energy at the expense of a relatively small increase in delay. Indeed, as shown in the figures, the saving is positive for the energy consumption and negative for the delay. The energy saving is always higher than 18% in every case, while the delay increase is always lower than 4%.

However, it can be noted (by comparing, e.g., Figures 5 and 7) that configurations with lower load are penalized in the delay. This effect is caused by the behavior of the simple linear adaptation strategy, which maintains constant utilization, and it highlights the relevance of setting not only maximum but also minimum values for the processing capacity (or for the node input workloads).

Figure 4: Power consumption with $S = 3$, $N = 5$.

Figure 5: User (VNF) delays with $S = 3$, $N = 5$.

Figure 6: Power consumption with $S = 5$, $N = 3$.

Figure 7: User (VNF) delays with $S = 5$, $N = 3$.

Figure 8: Power consumption with $S = 5$, $N = 5$.

Figure 9: User (VNF) delays with $S = 5$, $N = 5$.

Figure 10: Power consumption and delay product of the various cases.

Another consideration arising from the results concerns the variation of the energy consumption obtained by changing the number of players and/or nodes. For example, Figure 6 (case $S = 5$, $N = 3$) shows a total power consumption significantly higher than the one shown in Figure 8 (case $S = 5$, $N = 5$). The reason for this difference is the lower workload managed by the nodes in the case $S = 5$, $N = 5$ (a greater number of nodes with an equal number of users), which makes it possible to exploit the AR mechanisms with significant energy saving.

Finally, Figure 10 shows the product of the power consumption and the delay for each studied case. As described in the previous sections, our criterion aims at minimizing this product. As can be easily seen, the saving resulting from the application of our policy is always above 10%.

6. Conclusions

We have examined the possible effect of introducing simple energy optimization strategies in physical network nodes performing services for a set of VNFs that request their computational power within an NFV environment. The VNFs' strategy profiles for resource sharing have been obtained from the solution of a noncooperative game, where the players are aware of the adaptive behavior of the resources and aim at minimizing costs that reflect this behavior. We have compared the energy consumption and processing delay with those stemming from a game played with non-energy-aware nodes and VNFs. The results show that the effect on delay is almost negligible, whereas significant power saving can be obtained. In particular, the energy saving is always higher than 18% in every case, with a delay increase always lower than 4%.

From another point of view, it would also be of interest to investigate socially optimal solutions stemming from a Nash Bargaining Problem (NBP), along the lines of [37], by defining suitable utility functions for the players. Another interesting evolution of this work could be the application of additional constraints to the proposed solution, such as, for example, the maximum delay that a flow can support.

Appendix

We check the maintenance of continuous differentiability at the point $f_j = \mu_j^{\max}/\theta_j$. By noting that the derivative of the cost function of player $i$ with respect to $f_j^i$ has the form

$$\frac{\partial J^i}{\partial f_j^i} = c_j^*(f_j) + f_j^i\, c_j^{*\prime}(f_j), \qquad (A.1)$$

it is immediate to note that we need to compare

$$\frac{\partial}{\partial f_j^i} \left[ \frac{K_j (f_j^i + f_j^{-i})^{1/\alpha_j} (\mu_j^{\max})^{3 - 1/\alpha_j}}{\mu_j^{\max} - f_j^i - f_j^{-i}} \right]_{f_j^i = \mu_j^{\max}/\theta_j - f_j^{-i}} \quad \text{and} \quad \frac{\partial}{\partial f_j^i} \left[ h_j \cdot (f_j)^2 \right]_{f_j^i = \mu_j^{\max}/\theta_j - f_j^{-i}}. \qquad (A.2)$$

We have

$$K_j\, \frac{(1/\alpha_j)(f_j)^{1/\alpha_j - 1} (\mu_j^{\max})^{3 - 1/\alpha_j} (\mu_j^{\max} - f_j) + (f_j)^{1/\alpha_j} (\mu_j^{\max})^{3 - 1/\alpha_j}}{(\mu_j^{\max} - f_j)^2} \Bigg|_{f_j = \mu_j^{\max}/\theta_j}$$

$$= K_j\, \frac{(\mu_j^{\max})^{3 - 1/\alpha_j} (\mu_j^{\max}/\theta_j)^{1/\alpha_j} \left[ (1/\alpha_j)(\mu_j^{\max}/\theta_j)^{-1} (\mu_j^{\max} - \mu_j^{\max}/\theta_j) + 1 \right]}{(\mu_j^{\max} - \mu_j^{\max}/\theta_j)^2}$$

$$= K_j\, \frac{(\mu_j^{\max})^{3} (\theta_j)^{-1/\alpha_j} \left[ (1/\alpha_j)\, \theta_j (1 - 1/\theta_j) + 1 \right]}{(\mu_j^{\max})^{2} (1 - 1/\theta_j)^2} = K_j \mu_j^{\max}\, \frac{(\theta_j)^{-1/\alpha_j} \left( \theta_j/\alpha_j - 1/\alpha_j + 1 \right)}{\left( (\theta_j - 1)/\theta_j \right)^2},$$

$$2 h_j f_j \Big|_{f_j = \mu_j^{\max}/\theta_j} = 2 K_j\, \frac{(\theta_j)^{3 - 1/\alpha_j}}{\theta_j - 1} \cdot \frac{\mu_j^{\max}}{\theta_j}. \qquad (A.3)$$

Then, let us check when

$$K_j \mu_j^{\max}\, \frac{(\theta_j)^{-1/\alpha_j} \left( \theta_j/\alpha_j - 1/\alpha_j + 1 \right)}{\left( (\theta_j - 1)/\theta_j \right)^2} = 2 K_j\, \frac{(\theta_j)^{3 - 1/\alpha_j}}{\theta_j - 1} \cdot \frac{\mu_j^{\max}}{\theta_j},$$

$$\frac{\left( \theta_j/\alpha_j - 1/\alpha_j + 1 \right) (\theta_j)^2}{\theta_j - 1} = 2 (\theta_j)^2,$$

$$\frac{\theta_j}{\alpha_j} - \frac{1}{\alpha_j} + 1 = 2 \theta_j - 2,$$

$$\left( \frac{1}{\alpha_j} - 2 \right) \frac{3 \alpha_j - 1}{2 \alpha_j - 1} - \frac{1}{\alpha_j} + 3 = 0,$$

$$\left( \frac{1 - 2 \alpha_j}{\alpha_j} \right) \frac{3 \alpha_j - 1}{2 \alpha_j - 1} - \frac{1}{\alpha_j} + 3 = 0,$$

$$\frac{1 - 3 \alpha_j}{\alpha_j} - \frac{1}{\alpha_j} + 3 = 0,$$

$$1 - 3 \alpha_j - 1 + 3 \alpha_j = 0. \qquad (A.4)$$

The last identity holds for every $\alpha_j$; hence, the two derivatives in (A.2) coincide, and the composite optimal cost function is continuously differentiable at $f_j = \mu_j^{\max}/\theta_j$.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was partially supported by the H2020 European Projects ARCADIA (http://www.arcadia-framework.eu) and INPUT (http://www.input-project.eu), funded by the European Commission.

References

[1] C. Bianco, F. Cucchietti, and G. Griffa, "Energy consumption trends in the next generation access network—a telco perspective," in Proceedings of the 29th International Telecommunication Energy Conference (INTELEC '07), pp. 737-742, Rome, Italy, September-October 2007.

[2] GeSI, SMARTer 2020: The Role of ICT in Driving a Sustainable Future, December 2012, http://gesi.org/portfolio/report/72.

[3] A. Manzalini, "Future edge ICT networks," IEEE COMSOC MMTC E-Letter, vol. 7, no. 7, pp. 1-4, 2012.

[4] B. A. A. Nunes, M. Mendonca, X.-N. Nguyen, K. Obraczka, and T. Turletti, "A survey of software-defined networking: past, present, and future of programmable networks," IEEE Communications Surveys & Tutorials, vol. 16, no. 3, pp. 1617-1634, 2014.

[5] "Network functions virtualisation—introductory white paper," in Proceedings of the SDN and OpenFlow World Congress, Darmstadt, Germany, October 2012.

[6] R. Bolla, R. Bruschi, F. Davoli, and C. Lombardo, "Fine-grained energy-efficient consolidation in SDN networks and devices," IEEE Transactions on Network and Service Management, vol. 12, no. 2, pp. 132-145, 2015.

[7] R. Bolla, R. Bruschi, F. Davoli, and F. Cucchietti, "Energy efficiency in the future internet: a survey of existing approaches and trends in energy-aware fixed network infrastructures," IEEE Communications Surveys & Tutorials, vol. 13, no. 2, pp. 223-244, 2011.

[8] I. Menache and A. Ozdaglar, Network Games: Theory, Models, and Dynamics, Morgan & Claypool Publishers, San Rafael, Calif, USA, 2011.

[9] R. Bolla, R. Bruschi, F. Davoli, and P. Lago, "Optimizing the power-delay product in energy-aware packet forwarding engines," in Proceedings of the 24th Tyrrhenian International Workshop on Digital Communications—Green ICT (TIWDC '13), pp. 1-5, IEEE, Genoa, Italy, September 2013.

[10] A. Khan, A. Zugenmaier, D. Jurca, and W. Kellerer, "Network virtualization: a hypervisor for the internet," IEEE Communications Magazine, vol. 50, no. 1, pp. 136-143, 2012.

[11] Q. Duan, Y. Yan, and A. V. Vasilakos, "A survey on service-oriented network virtualization toward convergence of networking and cloud computing," IEEE Transactions on Network and Service Management, vol. 9, no. 4, pp. 373-392, 2012.

[12] G. M. Roy, S. K. Saurabh, N. M. Upadhyay, and P. K. Gupta, "Creation of virtual node, virtual link and managing them in network virtualization," in Proceedings of the World Congress on Information and Communication Technologies (WICT '11), pp. 738-742, IEEE, Mumbai, India, December 2011.

[13] A. Manzalini, R. Saracco, C. Buyukkoc et al., "Software-defined networks or future networks and services—main technical challenges and business implications," in Proceedings of the IEEE Workshop SDN4FNS, Trento, Italy, November 2013.

[14] M. Yu, Y. Yi, J. Rexford, and M. Chiang, "Rethinking virtual network embedding: substrate support for path splitting and migration," ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, pp. 17-19, 2008.

[15] A. Jarray and A. Karmouch, "Decomposition approaches for virtual network embedding with one-shot node and link mapping," IEEE/ACM Transactions on Networking, vol. 23, no. 3, pp. 1012-1025, 2015.


[16] N. M. M. K. Chowdhury, M. R. Rahman, and R. Boutaba, "Virtual network embedding with coordinated node and link mapping," in Proceedings of the 28th Conference on Computer Communications (IEEE INFOCOM '09), pp. 783-791, Rio de Janeiro, Brazil, April 2009.

[17] X. Cheng, S. Su, Z. Zhang et al., "Virtual network embedding through topology-aware node ranking," ACM SIGCOMM Computer Communication Review, vol. 41, no. 2, pp. 38-47, 2011.

[18] J. F. Botero, X. Hesselbach, M. Duelli, D. Schlosser, A. Fischer, and H. De Meer, "Energy efficient virtual network embedding," IEEE Communications Letters, vol. 16, no. 5, pp. 756-759, 2012.

[19] A. Leivadeas, C. Papagianni, and S. Papavassiliou, "Efficient resource mapping framework over networked clouds via iterated local search-based request partitioning," IEEE Transactions on Parallel and Distributed Systems, vol. 24, no. 6, pp. 1077-1086, 2013.

[20] A. Fischer, J. F. Botero, M. T. Beck, H. de Meer, and X. Hesselbach, "Virtual network embedding: a survey," IEEE Communications Surveys & Tutorials, vol. 15, no. 4, pp. 1888-1906, 2013.

[21] M. Scholler, M. Stiemerling, A. Ripke, and R. Bless, "Resilient deployment of virtual network functions," in Proceedings of the 5th International Congress on Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT '13), pp. 208-214, Almaty, Kazakhstan, September 2013.

[22] A. Jarray and A. Karmouch, "VCG auction-based approach for efficient Virtual Network embedding," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 609-615, Ghent, Belgium, May 2013.

[23] A. Gember-Jacobson, R. Viswanathan, C. Prakash et al., "OpenNF: enabling innovation in network function control," in Proceedings of the ACM Conference on Special Interest Group on Data Communication (SIGCOMM '14), pp. 163-174, ACM, Chicago, Ill, USA, August 2014.

[24] J. Antoniou and A. Pitsillides, Game Theory in Communication Networks: Cooperative Resolution of Interactive Networking Scenarios, CRC Press, Boca Raton, Fla, USA, 2013.

[25] M. S. Seddiki, M. Frikha, and Y.-Q. Song, "A non-cooperative game-theoretic framework for resource allocation in network virtualization," Telecommunication Systems, vol. 61, no. 2, pp. 209-219, 2016.

[26] L. Pavel, Game Theory for Control of Optical Networks, Springer, New York, NY, USA, 2012.

[27] E. Altman, T. Boulogne, R. El-Azouzi, T. Jimenez, and L. Wynter, "A survey on networking games in telecommunications," Computers & Operations Research, vol. 33, no. 2, pp. 286-311, 2006.

[28] S. Nedevschi, L. Popa, G. Iannaccone, S. Ratnasamy, and D. Wetherall, "Reducing network energy consumption via sleeping and rate-adaptation," in Proceedings of the 5th USENIX Symposium on Networked Systems Design and Implementation (NSDI '08), pp. 323-336, San Francisco, Calif, USA, April 2008.

[29] S. Henzler, Power Management of Digital Circuits in Deep Sub-Micron CMOS Technologies, vol. 25 of Springer Series in Advanced Microelectronics, Springer, Dordrecht, The Netherlands, 2007.

[30] K. Li, "Scheduling parallel tasks on multiprocessor computers with efficient power management," in Proceedings of the IEEE International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum (IPDPSW '10), pp. 1-8, IEEE, Atlanta, Ga, USA, April 2010.

[31] T. Basar, Lecture Notes on Non-Cooperative Game Theory, Hamilton Institute, Kildare, Ireland, 2010, http://www.hamilton.ie/ollie/Downloads/Game.pdf.

[32] A. Orda, R. Rom, and N. Shimkin, "Competitive routing in multiuser communication networks," IEEE/ACM Transactions on Networking, vol. 1, no. 5, pp. 510-521, 1993.

[33] E. Altman, T. Basar, T. Jimenez, and N. Shimkin, "Competitive routing in networks with polynomial costs," IEEE Transactions on Automatic Control, vol. 47, no. 1, pp. 92-96, 2002.

[34] J. B. Rosen, "Existence and uniqueness of equilibrium points for concave N-person games," Econometrica, vol. 33, no. 3, pp. 520-534, 1965.

[35] Y. A. Korilis, A. A. Lazar, and A. Orda, "Architecting noncooperative networks," IEEE Journal on Selected Areas in Communications, vol. 13, no. 7, pp. 1241-1251, 1995.

[36] MATLAB®: The Language of Technical Computing, http://www.mathworks.com/products/matlab/.

[37] H. Yaïche, R. R. Mazumdar, and C. Rosenberg, "A game theoretic framework for bandwidth allocation and pricing in broadband networks," IEEE/ACM Transactions on Networking, vol. 8, no. 5, pp. 667-678, 2000.

Research Article

A Processor-Sharing Scheduling Strategy for NFV Nodes

Giuseppe Faraci, Alfio Lombardo, and Giovanni Schembra

Dipartimento di Ingegneria Elettrica, Elettronica e Informatica (DIEEI), University of Catania, 95123 Catania, Italy

Correspondence should be addressed to Giovanni Schembra; schembra@dieei.unict.it

Received 2 November 2015; Accepted 12 January 2016

Academic Editor: Xavier Hesselbach

Copyright © 2016 Giuseppe Faraci et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The introduction of the two paradigms SDN and NFV to "softwarize" the current Internet is making management and resource allocation two key challenges in the evolution towards the Future Internet. In this context, this paper proposes Network-Aware Round Robin (NARR), a processor-sharing strategy to reduce delays in traversing SDN/NFV nodes. The application of NARR alleviates the job of the Orchestrator by automatically working at the intranode level, dynamically assigning the processor slices to the virtual network functions (VNFs) according to the state of the queues associated with the output links of the network interface cards (NICs). An extensive simulation set is presented to show the improvements achieved with respect to two more processor-sharing strategies chosen as reference.

1. Introduction

In the last few years, the diffusion of new complex and efficient distributed services in the Internet has become increasingly difficult because of the ossification of the Internet protocols, the proprietary nature of existing hardware appliances, the costs, and the lack of skilled professionals for maintaining and upgrading them.

In order to alleviate these problems, two new network paradigms, Software Defined Networks (SDN) [1-5] and Network Functions Virtualization (NFV) [6, 7], have been recently proposed, with the specific target of improving the flexibility of network service provisioning and reducing the time to market of new services.

SDN is an emerging architecture that aims at making the network dynamic, manageable, and cost-effective, by decoupling the system that makes decisions about where traffic is sent (the control plane) from the underlying system that forwards traffic to the selected destination (the data plane). In this way, the network control becomes directly programmable, and the underlying infrastructure is abstracted for applications and network services.

NFV is a core structural change in the way telecommunication infrastructure is deployed. The NFV initiative was started in late 2012 by some of the biggest telecommunications service providers, which formed an Industry Specification Group (ISG) within the European Telecommunications Standards Institute (ETSI). The interest has grown, involving today more than 28 network operators and over 150 technology providers from across the telecommunications industry [7]. The NFV paradigm leverages virtualization technologies and commercial off-the-shelf programmable hardware, such as general-purpose servers, storage, and switches, with the final target of decoupling the software implementation of network functions from the underlying hardware.

The coexistence and the interaction of both the NFV and SDN paradigms are giving network operators the possibility of achieving greater agility and acceleration in new service deployments, with a consequent considerable reduction of both Capital Expenditure (CAPEX) and Operational Expenditure (OPEX) [8].

One of the main challenging problems in deploying an SDN/NFV network is an efficient design of the resource allocation and management functions that are in charge of the network Orchestrator. Although this task is well covered in data center and cloud scenarios [9, 10], it is currently a challenging problem in a geographic network, where transmission delays cannot be neglected and the transmission capacities of the interconnection links are not comparable with those of the above scenarios. This is the reason why the problem of orchestrating an SDN/NFV network is still open and attracts a lot of research interest from both academia and industry.



More specifically, the whole problem of orchestrating an SDN/NFV network is very complex because it involves design work at both the internode and intranode levels [11, 12]. At the internode level, and in all the cases where the time between each execution is significantly greater than the time to collect, compute, and disseminate results, the application of a centralized approach is practicable [13, 14]. In these cases, in fact, taking into consideration both traffic characterization and the required level of quality of service (QoS), the Orchestrator is able to decide in a centralized manner how many instances of the same function have to be run simultaneously, which network nodes have to execute them, and the routing strategies that allow traffic flows to cross the nodes where the requested network functions are running.

Instead, centralizing a strategy dealing with operations that require dynamic reconfiguration of network resources is actually unfeasible for the Orchestrator, for problems of resilience and scalability. Therefore, adaptive management operations with short timescales require a distributed approach. The authors of [15] propose a framework to support adaptive resource management operations, which involve short-timescale reconfiguration of network resources, showing how requirements in terms of load balancing and energy management [16-18] can be satisfied. Another work in the same direction is [19], which proposes a solution for the consolidation of VMs on local computing resources, exclusively based on local information. The work in [20] aims at alleviating the inter-VM network latency, defining a hypervisor scheduler algorithm that is able to take into consideration the allocation of the resources within a consolidated environment, scheduling VMs to reduce their waiting latency in the run queue. Another approach is applied in [21], which introduces a policy to manage the internal on/off switching of virtual network functions (VNFs) in NFV-compliant Customer Premises Equipment (CPE) devices.

The focus of this paper is on another fundamental problem that is inherent to resource allocation within an SDN/NFV node, that is, the decision of the percentage of CPU to be assigned to each VNF. If at first glance this could appear to be a classical processor-sharing problem, widely explored in the past literature [15, 22, 23], it is actually much more complex, because performance can be strongly improved by leveraging the correlation with the output queues associated with the network interface cards (NICs).

With all this in mind, the main contribution of this paper is the definition of a processor-sharing policy, in the following referred to as Network-Aware Round Robin (NARR), which is specific for SDN/NFV nodes. Starting from the consideration that packets that have received the service of a network function from a virtual machine (VM) running on a given node are enqueued to wait for transmission through a given NIC, the proposed strategy dynamically changes the slices of the CPU assigned to each VNF according to the state of the output NIC queues. More specifically, the NARR strategy gives a larger CPU slice to serve packets that will leave the node through the NIC that is currently less loaded, in such a way as to minimize wastes of the NIC output link capacities, also minimizing the overall delay experienced by packets traversing nodes that implement NARR.

As a side contribution, the paper calculates an on-off model for the traffic leaving the SDN/NFV node on each NIC output link. This model can be used as a building block for the design and performance evaluation of an entire network.

The paper is structured as follows. Section 2 describes the node architecture. Section 3 introduces the NARR strategy. Section 4 presents a case study and shows some numerical results. Finally, Section 5 draws some conclusions and discusses some future work.

2. System Description

The target of this section is the description of the system we consider in the rest of the paper. It is an SDN/NFV node, like the one considered in [11, 12], where we will apply the NARR strategy. Its architecture, shown in Figure 1(a), is compliant with the ETSI Specifications [24]. It is composed of three different domains, namely, the Compute domain, the Hypervisor domain, and the Infrastructure Network domain. The Compute domain provides the computational and storage hardware resources that allow the node to host the VNFs. Thanks to the computing and storage virtualization provided by the Hypervisor domain, a VM can be created, migrated from one node to another one, and halted, in order to optimize the deployment according to specific performance parameters. Communications among the VMs, and between the VMs and the external environment, are provided by the Infrastructure Network domain.

The SDN/NFV node is remotely controlled by the Orchestrator, whose architecture is shown in Figure 1(b). It is constituted by three main blocks. The Orchestration Engine executes all the algorithms and the strategies to manage and orchestrate the whole network. After each decision, the Orchestration Engine requests that the NFV Coordinator instantiate, migrate, or halt VMs and, consequently, requests that the SDN Controller modify the flow tables of the SDN switches in the network, in such a way that traffic flows can traverse the VMs hosting the requested VNFs.

With this in mind, a functional architecture of the NFV node is represented in Figure 2. Its main components are the Processor, which manages the Compute domain and hosts the Hypervisor domain, and the network interface cards (NICs), with their queues, which constitute the "Network Hardware" block in Figure 1.

Let $M$ be the number of virtual network functions (VNFs) that are running in the node, and let $L$ be the number of output NICs. In order to simplify the notation, in the following we will assume that all the NICs have the same characteristics in terms of buffer capacity and output rate. So, let $K^{(NIC)}$ be the size of the queue associated with each NIC, that is, the maximum number of packets that each queue can contain, and let $\mu^{(NIC)}$ be the transmission rate of the output link associated with each NIC, expressed in bit/s.

The Flow Distributor block has the task of routing each entering flow towards the function required by the flow. It is a software SDN switch that can be implemented, for example, with Open vSwitch [25]. It routes the flows to the VMs running the requested VNFs according to the control messages received by the SDN Controller residing in the Orchestrator.

Figure 1: Network function virtualization infrastructure. (a) NFV node; (b) Orchestrator.

Figure 2: NFV node functional architecture.

The most common protocol that can be used for the communications between the SDN Controller and the Orchestrator is OpenFlow [26].

Let $\mu^{(P)}$ be the total processing rate of the processor, expressed in packets/s. This rate is shared among all the active functions according to a processor rate scheduling strategy. Let $\boldsymbol{\mu}^{(F)}$ be the array whose generic element $\mu^{(F)}_{[m]}$, with $m \in \{1, \ldots, M\}$, is the portion of the processor rate assigned to the VM implementing the function $F_m$. Of course, we have

$$\sum_{m=1}^{M} \mu^{(F)}_{[m]} = \mu^{(P)}. \qquad (1)$$

Once a packet has been served by the required function, it is sent to one of the NICs to exit from the node. If the NIC is transmitting another packet, the arriving packets are enqueued in the NIC queue. We will indicate the queue associated with the generic NIC $l$ as $Q^{(NIC)}_l$.

In order to implement the NARR strategy proposed in this paper, we realize the block relative to each function with a set of $L$ parallel queues, in the following referred to as inside-function queues, as shown in Figure 3 for the generic function $F_m$. The generic $l$th inside-function queue of the function $F_m$, indicated as $Q_{m,l}$ in Figure 3, is used to enqueue packets that, after receiving the service of the function $F_m$, will leave the node through the NIC $l$. Let $K^{(F)}_{Ins}$ be the size of each inside-function queue. Each inside-function queue of the generic function $F_m$ receives a portion of the processor rate assigned to that function. Let $\mu^{(F_{Ins})}_{[m,l]}$ be the portion of the processor rate assigned to the queue $Q_{m,l}$ of the function $F_m$. Of course, we have

$$\sum_{l=1}^{L} \mu^{(F_{Ins})}_{[m,l]} = \mu^{(F)}_{[m]}. \qquad (2)$$


Figure 3: Block diagram of the generic function $F_m$.

The portion of the processor rate associated with each inside-function queue is dynamically changed by the Processor Arbiter according to the NARR strategy described in Section 3.

3. NARR Processor-Sharing Strategy

The NARR (Network-Aware Round Robin) processor-sharing strategy observes the state of both the inside-function queues and the NIC queues, with the goal of reducing as far as possible the inactivity periods of the NIC output links. As already introduced in the Introduction, its definition starts from the fact that packets that have received the service of a VNF are enqueued to wait for transmission through a given NIC. So, in order to avoid output link capacity waste, NARR dynamically changes the slices of the CPU assigned to each VNF, and in particular to each inside-function queue, according to the state of the output NIC queues, assigning larger CPU slices to serve packets that will leave the node through less-loaded NICs.

More specifically, the Processor Arbiter decides the processor rate portions according to two different steps.

Step 1 (assignment of the processor rate portion to the aggregation of queues whose output is a specific NIC). This step meets the target of the proposed strategy, that is, to reduce as much as possible underutilization of the NIC output links and, as a consequence, delays in the relative queues. To this purpose, let us consider a virtual queue that contains all the packets that are stored in all the inside-function queues $Q_{m,l}$, for each $m \in [1, M]$, that is, all the packets that will leave the node through the NIC $l$. Let us indicate this virtual queue as $Q^{(\rightarrow NIC_l)}_{Aggr}$ and its service rate as $\mu^{(\rightarrow NIC_l)}_{Aggr}$. Of course, we have

$$\mu^{(\rightarrow NIC_l)}_{Aggr} = \sum_{m=1}^{M} \mu^{(F_{Ins})}_{[m,l]}. \qquad (3)$$

With this in mind, the idea is to give a higher processor slice to the inside-function queues whose flows are directed to the NICs that are emptying.

Taking into account the goal of privileging the flows that will leave the node through underloaded NICs, the Processor Arbiter calculates $\mu^{(\rightarrow NIC_l)}_{Aggr}$ as follows:

$$\mu^{(\rightarrow NIC_l)}_{Aggr} = \frac{q_{\mathrm{ref}} - S^{(NIC)}_l}{\sum_{j=1}^{L} \left( q_{\mathrm{ref}} - S^{(NIC)}_j \right)}\, \mu^{(P)}, \qquad (4)$$

where $S^{(NIC)}_l$ represents the state of the queue associated with the NIC $l$, while $q_{\mathrm{ref}}$ is defined as follows:

$$q_{\mathrm{ref}} = \min \left\{ \alpha \cdot \max_j \left( S^{(NIC)}_j \right),\, K^{(NIC)} \right\}. \qquad (5)$$

The term $q_{\mathrm{ref}}$ is a reference target value calculated from the state of the NIC queue that has the highest length, amplified with a coefficient $\alpha$, and truncated to the maximum queue size $K^{(NIC)}$. It is determined in such a way that, if we consider $\alpha = 1$, the NIC queue that has the highest length does not receive packets from the inside-function queues, because their service rate is set to zero, while the other queues receive packets with a rate that is proportional to the distance between their length and the length of the most overloaded NIC queue. However, through an extensive set of simulations, we deduced that setting $\alpha = 1$ causes bad performance, because there is always a group of inside-function queues that are not served. Instead, all the $\alpha$ values in the interval $]1, 2]$ give almost equivalent performance. For this reason, in the numerical analysis presented in Section 4, we have set $\alpha = 1.2$.
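As an illustration of Step 1, the sketch below (Python with NumPy; names are ours, and the uniform fallback for the degenerate all-empty case is our assumption, since (4) is undefined when all weights vanish) computes the per-NIC aggregate rates from the NIC queue states.

import numpy as np

def step1_aggregate_rates(S_nic, mu_P, K_nic, alpha=1.2):
    # Step 1 of NARR: split the processor rate mu_P among the L per-NIC
    # aggregates according to (4), with the reference level q_ref of (5)
    q_ref = min(alpha * S_nic.max(), K_nic)
    w = np.maximum(q_ref - S_nic, 0.0)   # less-loaded NICs get larger weights
    if w.sum() == 0.0:                   # e.g., all NIC queues empty: share equally
        w = np.ones_like(S_nic)
    return mu_P * w / w.sum()

# Example: NIC 2 is almost empty, so its aggregate gets the largest slice
print(step1_aggregate_rates(np.array([40.0, 2.0, 25.0]), mu_P=306e3, K_nic=3000))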

Step 2 (assignment of the processor rate portion to each inside-function queue). Let us consider the generic $l$th inside-function queue of the function $F_m$, that is, the queue $Q_{m,l}$. Its service rate is calculated as being proportional to the current state of this queue in comparison with the other $l$th queues of the other functions. To this purpose, let us indicate the state of the virtual queue $Q^{(\rightarrow NIC_l)}_{Aggr}$ as $S^{(\rightarrow NIC_l)}_{Aggr}$. Of course, it can be calculated as the sum of the states of all the inside-function queues $Q_{m,l}$, for each $m \in [1, M]$; that is,

$$S^{(\rightarrow NIC_l)}_{Aggr} = \sum_{k=1}^{M} S^{(F_{Ins})}_{k,l}. \qquad (6)$$

So the service rate of the inside-function queue $Q_{m,l}$ is determined as a fraction of the service rate assigned at the first step to the virtual queue $Q^{(\rightarrow NIC_l)}_{Aggr}$, $\mu^{(\rightarrow NIC_l)}_{Aggr}$, as follows:

$$\mu^{(F_{Ins})}_{[m,l]} = \frac{S^{(F_{Ins})}_{m,l}}{S^{(\rightarrow NIC_l)}_{Aggr}}\, \mu^{(\rightarrow NIC_l)}_{Aggr}. \qquad (7)$$

Of course, if at any time an inside-function queue remains empty, the processor rate portion assigned to it will be shared among the other queues, proportionally to the processor portions previously assigned. Likewise, if at some instant an empty queue receives a new packet, the previous processor rate portion is reassigned to that queue.
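Step 2 can then be sketched as follows (again our naming; note that the resharing of the rate of an empty aggregate, described above, is only indicated in a comment and not implemented).

def step2_inside_rates(S_ins, mu_aggr):
    # Step 2 of NARR: S_ins is the M x L matrix of inside-function queue
    # states S^(F_Ins)_{m,l}; mu_aggr holds the per-NIC rates from Step 1.
    # Returns the M x L matrix of rates mu^(F_Ins)_[m,l] as in (6)-(7)
    M, L = S_ins.shape
    rates = np.zeros((M, L))
    for l in range(L):
        tot = S_ins[:, l].sum()          # S_Aggr^(->NIC_l) from (6)
        if tot > 0.0:
            rates[:, l] = mu_aggr[l] * S_ins[:, l] / tot
        # an empty aggregate keeps no rate here; in the strategy its share
        # is reassigned to the nonempty queues in proportion to their portions
    return rates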


4. Case Study

In this section we present a numerical analysis of the behavior of an SDN/NFV node that applies the proposed NARR processor-sharing strategy, with the target of evaluating the achieved performance. To this purpose, we will consider two other processor-sharing strategies as reference, in the following referred to as round robin (RR) and queue-length weighted round robin (QLWRR). In both the reference cases, the node has the same $L$ NIC queues, but it has only $M$ processor queues, one for each function. Each of the $M$ queues has a size of $K^{(F)} = L \cdot K^{(F)}_{Ins}$, where $K^{(F)}_{Ins}$ represents the size of each internal function queue, already defined so far for the proposed strategy.

The RR strategy applies the classical round robin scheduling policy to serve the $M$ function queues; that is, it serves each function queue with a rate $\mu^{(F)}_{RR} = \mu^{(P)}/M$.

The QLWRR strategy, on the other hand, serves each function queue with a rate that is proportional to the queue length; that is,

$$\mu^{(F_m)}_{QLWRR} = \frac{S^{(F_m)}}{\sum_{k=1}^{M} S^{(F_k)}}\, \mu^{(P)}, \qquad (8)$$

where $S^{(F_m)}$ is the state of the queue associated with the function $F_m$.

4.1. Parameter Settings. In this numerical analysis, we consider a node with $M = 4$ VNFs and $L = 3$ output NICs. We loaded the node with balanced traffic constituted by $N_F = 12$ flows, each characterized by a different 2-tuple (requested NFV, output NIC). Each flow has been generated with an on-off model characterized by exponentially distributed on and off periods. When a flow is in the off state, no packets arrive to the node from it; instead, when it is in the on state, packets arrive with an average rate $\lambda_{ON}$. Each packet is assumed to have an exponentially distributed size with a mean of 1 kbyte. In all the simulations we have considered the same on-off cycle duration $\delta = T_{OFF} + T_{ON} = 5$ msec, while we have varied the burstiness. The burstiness of on-off sources is defined as

$$b = \frac{\lambda_{ON}}{\lambda_{Mean}}, \qquad (9)$$

where $\lambda_{Mean}$ is the mean emission rate. Now, indicating the probability of the on state as $\pi_{ON}$, and taking into account that $\lambda_{Mean} = \lambda_{ON} \cdot \pi_{ON}$ and $\pi_{ON} = T_{ON}/(T_{OFF} + T_{ON})$, we have

$$b = \frac{T_{OFF} + T_{ON}}{T_{ON}}. \qquad (10)$$

In our analysis the burstiness has been varied in the range $[2, 22]$. Consequently, we have derived the mean durations of the off and on periods as follows:

$$T_{ON} = \frac{\delta}{b}, \qquad T_{OFF} = \delta - T_{ON}. \qquad (11)$$

Finally, in order to maintain the same mean packet rate for different values of $T_{OFF}$ and $T_{ON}$, we have assumed that 75 packets are transmitted on average in each on period, that is, $\lambda_{ON} = 75/T_{ON}$. The resulting mean emission rate is $\lambda_{Mean} = 122.9$ Mbit/s.
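For concreteness, the on-off source parameters of (9)-(11) can be derived from the burstiness as follows (Python; names are ours). The printed value reproduces the mean emission rate quoted above.

delta = 5e-3                      # on-off cycle duration: 5 msec
packet_bits = 1024 * 8            # 1-kbyte packets

def onoff_params(b, pkts_per_on=75):
    T_on = delta / b              # (11)
    T_off = delta - T_on
    lam_on = pkts_per_on / T_on   # peak packet rate in the on state
    lam_mean = lam_on * T_on / delta   # = 75/delta = 15000 packets/s
    return T_on, T_off, lam_on, lam_mean

T_on, T_off, lam_on, lam_mean = onoff_params(b=7)
print(lam_mean * packet_bits / 1e6)   # about 122.9 Mbit/s, as in the text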

As far as the NICs are concerned, we have considered an output rate of $\mu^{(NIC)} = 980$ Mbit/s, so having a utilization coefficient on each NIC of $\rho^{(NIC)} = 0.5$, and a queue size of $K^{(NIC)} = 3000$ packets. Regarding the processor, we considered a size of each inside-function queue of $K^{(F)}_{Ins} = 3000$ packets. Finally, we have analyzed two different processor cases. In the first case, we considered a processor that is able to process $\mu^{(P)} = 306$ kpackets/s, while in the second case we assumed a processor rate of $\mu^{(P)} = 204$ kpackets/s. Therefore, defining the processor utilization coefficient as

$$\rho^{(P)} = \frac{N_F \cdot \lambda_{Mean}}{\mu^{(P)}}, \qquad (12)$$

with $N_F \cdot \lambda_{Mean}$ being the total mean arrival rate to the node, the two considered cases are characterized by processor utilization coefficients of $\rho^{(P)}_{Low} = 0.6$ and $\rho^{(P)}_{High} = 0.9$, respectively.

4.2. Numerical Results. In this section we present some results achieved by discrete-event simulations. The simulation tool used in the paper is publicly available in [27]. We first present a temporal analysis of the main variables characterizing the SDN/NFV node, and then we show a performance comparison of NARR with the two strategies RR and QLWRR, taken as a reference.

For the temporal analysis we focus on a short time interval of 720 $\mu$sec, in order to be able to clearly highlight the evolution of the considered processes. We have loaded the node with on-off traffic like the one described in Section 4.1, with a burstiness $b = 7$.

In the first period ranging from the instants 01063 and01067 from Figure 4 we can notice that the NIC queue119876(NIC)

1

has a greater length than the queue 119876(NIC)3 while 119876(NIC)

2 isempty For this reason in this period as shown in Figure 5the processor is shared between119876(rarrNIC1)

Aggr and119876(rarrNIC3)Aggr and

the slice assigned to serve 119876(rarrNIC3)Aggr is higher in such a way

that the two queue lengths 119876(NIC)1 and 119876(NIC)

3 reach the samevalue situation that happens at the end of the first periodaround the instant 01067 During this period the behaviorof both the virtual queues119876(rarrNIC1)

Aggr and119876(rarrNIC3)Aggr in Figure 6

remains flat showing the fact that the received processor ratesare able to balance the arrival rates

At the beginning of the second period that rangesbetween the instants 01067 and 010693 the processor rateassigned to the virtual queue 119876(rarrNIC3)

Aggr has become not suffi-cient to serve the amount of arriving traffic and so the virtualqueue 119876(rarrNIC3)

Aggr increases its length as shown in Figure 6

Figure 4: Length evolution of the NIC queues.

Figure 5: Packet rate assigned to the virtual queues.

Figure 6: Length evolution of the virtual queues.

During this second period, the processor slices assigned to the two queues $Q^{(\rightarrow NIC_1)}_{Aggr}$ and $Q^{(\rightarrow NIC_3)}_{Aggr}$ are adjusted in such a way that $Q^{(NIC)}_1$ and $Q^{(NIC)}_3$ remain with comparable lengths.

The last period starts at the instant 0.10693, characterized by the fact that the aggregated queue $Q^{(\rightarrow NIC_2)}_{Aggr}$ leaves the empty state and therefore participates in the processor-sharing process. Since the NIC queue $Q^{(NIC)}_2$ is low-loaded, as shown in Figure 4, the largest slice is assigned to $Q^{(\rightarrow NIC_2)}_{Aggr}$, in such a way that $Q^{(NIC)}_2$ can reach the same length as the other two NIC queues as soon as possible.

Now, in order to show how the second step of the proposed strategy works, we present the behavior of the inside-function queues whose output is sent to the NIC queue $Q^{(NIC)}_3$ during the same short time interval considered so far. More specifically, Figures 7 and 8 show the length of the considered inside-function queues and the processor slices assigned to them, respectively. The behavior of the queue $Q_{2,3}$ is not shown because it is empty in the considered period. As we can observe from the above figures, we can subdivide the interval into four periods:

(i) The first period, ranging in the interval [0.1063, 0.10657], is characterized by an empty state of the queue $Q_{3,3}$. Thus, in this period, the processor slice assigned to the aggregated queue $Q^{(\rightarrow NIC_3)}_{Aggr}$ is shared by $Q_{1,3}$ and $Q_{4,3}$ only.

(ii) During the second period, ranging in the interval [0.10657, 0.1067], $Q_{1,3}$ is scarcely loaded (in particular, it is empty in the second part of this period), and so the processor slice assigned to $Q_{3,3}$ is increased.

(iii) In the third period, ranging between 0.1067 and 0.10693, all the queues increase and equally share the processor.

(iv) Finally, in the last period, starting at the instant 0.10693, as already observed in Figure 5, the processor slice assigned to the aggregated queue $Q^{(\rightarrow NIC_3)}_{Aggr}$ is suddenly decreased, and consequently the slices assigned to the queues $Q_{1,3}$, $Q_{3,3}$, and $Q_{4,3}$ are decreased as well.

The steady-state analysis is presented in Figures 9, 10, and 11, which show the mean delay in the inside-function queues, the mean delay in the NIC queues, and the durations of the off- and on-states on the output links, respectively. The values reported in all the figures have been derived as the mean values of the results of many simulation experiments, using Student's t-distribution and with a 95% confidence interval. The number of experiments carried out to evaluate each numerical result has been automatically decided by the simulation tool, with the requirement of achieving a confidence interval less than 0.001 of the estimated mean value. To this purpose, the confidence interval is calculated at the end of each run, and simulation is stopped only when the confidence interval matches the maximum error requirement. The duration of each run has been chosen in such a way that the sample standard deviation is so low that fewer than 30 runs are enough to match the requirement.
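The stopping rule can be condensed into the following Python sketch, where run_once stands for one whole simulation run returning the metric of interest (scipy is assumed to be available):

    import math
    from scipy import stats

    def run_until_confident(run_once, rel_err=0.001, conf=0.95, min_runs=3, max_runs=1000):
        # Launch independent runs until the Student's t confidence interval
        # half-width drops below rel_err times the estimated mean.
        samples = []
        while len(samples) < max_runs:
            samples.append(run_once())
            n = len(samples)
            if n < min_runs:
                continue
            mean = sum(samples) / n
            var = sum((x - mean) ** 2 for x in samples) / (n - 1)
            half = stats.t.ppf(1 - (1 - conf) / 2, n - 1) * math.sqrt(var / n)
            if half <= rel_err * abs(mean):
                break
        return mean, half, n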

[Figure 7: Length evolution of the inside-function queues Q_{m,3} for each m ∈ [1, M]: (a) queue Q_{1,3}; (b) queue Q_{3,3}; (c) queue Q_{4,3}.]

The figures compare the results achieved with the proposed strategy with the ones obtained with the two strategies RR and QLWRR. As said so far, results have been obtained against the burstiness and for two different values of the utilization coefficient, that is, ρ^(P)_Low = 0.6 and ρ^(P)_High = 0.9.

As expected, the mean lengths of the processor queues and the NIC queues increase with both the burstiness and the utilization coefficient.

Instead, we can note that the mean length of the processor queues is not affected by the applied policy. In fact, packets requiring the same function are enqueued in a unique queue and served with a rate μ^(P)/4 when the RR or QLWRR strategies are applied, while, when the NARR strategy is used, they are split into 12 different queues and served with a rate μ^(P)/12. If in classical queueing theory [28] the second case is worse than the first one, because of the presence of service rate wastes during the more probable periods of empty queues, this is not the case here, because the processor capacity of an empty queue is dynamically reassigned to the other queues.

The advantage of the NARR strategy is evident in Figure 10, where the mean delay in the NIC queues is represented. In fact, we can observe that by only giving more processor rate to the most loaded processor queues (with the QLWRR strategy) performance improvements are negligible, while by applying the NARR strategy we are able to obtain a delay reduction of about 12% in the case of a more performant processor (ρ^(P)_Low = 0.6), reaching 50% when the processor works with ρ^(P)_High = 0.9. The performance gain achieved with the NARR strategy increases with the burstiness and the processor load conditions, which are both likely: the first condition is due to the high burstiness of Internet traffic; the second one is true as well because the processor should not be overdimensioned for economic reasons; otherwise, if overdimensioned, it can be controlled with a processor rate management policy like the one presented in [29], in order to save energy.

Finally, Figure 11 shows the mean durations of the on- and off-periods on the node output links.

[Figure 8: Processor rate assigned to the inside-function queues Q_{m,3} for each m ∈ [1, M]: (a) processor rate μ^(F_Int)_[1,3]; (b) processor rate μ^(F_Int)_[3,3]; (c) processor rate μ^(F_Int)_[4,3].]

Only one curve is shown for each case, because we have considered a utilization coefficient ρ^(NIC) = 0.5 on each NIC queue, and therefore in the considered case we have T_ON = T_OFF. As expected, the mean on-off durations are higher when the processor rate is higher (i.e., lower utilization coefficient): in this case batches of packets in the NIC queues are served quickly. These results can be used to model the output traffic of each SDN/NFV node as an Interrupted Poisson Process (IPP) [30]. This model can be iteratively used to represent the input traffic of other nodes, with the final target of realizing the model of a whole SDN/NFV network.
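As a reference for such reuse, a minimal Python generator of an IPP trace is sketched below (exponential on/off periods with Poisson arrivals during on-states, following the classical formulation in [30]; parameter values are placeholders):

    import random

    def ipp_arrivals(lam, mean_on, mean_off, duration):
        # Poisson arrivals at rate lam during on-periods, silence otherwise.
        t, on = 0.0, True
        period_end = random.expovariate(1.0 / mean_on)
        arrivals = []
        while t < duration:
            if on:
                dt = random.expovariate(lam)
                if t + dt < period_end:
                    t += dt
                    arrivals.append(t)
                    continue
            t = period_end                     # jump to the state change
            on = not on
            period_end = t + random.expovariate(1.0 / (mean_on if on else mean_off))
        return [a for a in arrivals if a <= duration]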

5. Conclusions and Future Work

This paper addresses the problem of intranode resource allocation by introducing NARR, a processor-sharing strategy that leverages the consideration that, in any SDN/NFV node, packets that have received the service of a VNF are enqueued to wait for transmission through one of the output NICs. Therefore, the idea at the base of NARR is to dynamically change the slices of the CPU assigned to each VNF according to the state of the output NIC queues, giving more CPU to serve packets that will leave the node through the less-loaded NICs. In this way, wastes of the NIC output link capacities are minimized, and consequently the overall delay experienced by packets traversing the nodes that implement NARR is reduced.

SDN/NFV nodes that implement NARR can coexist in the same network with nodes that use other strategies, so facilitating a gradual introduction in the Future Internet.

As a side contribution, the simulator tool, which is publicly available on the web, gives the on-off model of the output links associated with each NIC as one of the results. This model can be used as a building block to realize a model for

[Figure 9: Mean delay in the function internal queues (ms) versus burstiness, for NARR, QLWRR, and RR, with ρ^(P)_High = 0.9 and ρ^(P)_Low = 0.6.]

[Figure 10: Mean delay in the NIC queues (ms) versus burstiness, for NARR, QLWRR, and RR, with ρ^(P)_High = 0.9 and ρ^(P)_Low = 0.6.]

the design and performance evaluation of a whole SDN/NFV network.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

[Figure 11: Mean off- and on-state duration of the traffic on the output links versus burstiness, for NARR, QLWRR, and RR, with ρ^(P)_High = 0.9 and ρ^(P)_Low = 0.6.]

Acknowledgment

This work has been partially supported by the INPUT (In-Network Programmability for Next-Generation Personal cloUd Service Support) project, funded by the European Commission under the Horizon 2020 Programme (Call H2020-ICT-2014-1, Grant no. 644672).

References

[1] White paper on "Software-Defined Networking: The New Norm for Networks," https://www.opennetworking.org.

[2] M. Yu, L. Jose, and R. Miao, "Software defined traffic measurement with OpenSketch," in Proceedings of the Symposium on Network Systems Design and Implementation (NSDI '13), vol. 13, pp. 29–42, Lombard, Ill, USA, April 2013.

[3] H. Kim and N. Feamster, "Improving network management with software defined networking," IEEE Communications Magazine, vol. 51, no. 2, pp. 114–119, 2013.

[4] S. H. Yeganeh, A. Tootoonchian, and Y. Ganjali, "On scalability of software-defined networking," IEEE Communications Magazine, vol. 51, no. 2, pp. 136–141, 2013.

[5] B. A. A. Nunes, M. Mendonca, X.-N. Nguyen, K. Obraczka, and T. Turletti, "A survey of software-defined networking: past, present, and future of programmable networks," IEEE Communications Surveys and Tutorials, vol. 16, no. 3, pp. 1617–1634, 2014.

[6] White paper on "Network Functions Virtualisation," http://portal.etsi.org/NFV/NFV_White_Paper.pdf.

[7] Network Functions Virtualisation (NFV): Network Operator Perspectives on Industry Progress, ETSI, October 2013, http://portal.etsi.org/NFV/NFV_White_Paper2.pdf.

[8] A. Manzalini, R. Saracco, E. Zerbini, et al., "Software-Defined Networks for Future Networks and Services," white paper based on the IEEE Workshop SDN4FNS, http://sites.ieee.org/sdn4fns/whitepaper.

[9] R. Z. Frantz, R. Corchuelo, and J. L. Arjona, "An efficient orchestration engine for the cloud," in Proceedings of the IEEE 3rd International Conference on Cloud Computing Technology and Science (CloudCom '11), vol. 2, pp. 711–716, Athens, Greece, December 2011.

[10] K. Bousselmi, Z. Brahmi, and M. M. Gammoudi, "Cloud services orchestration: a comparative study of existing approaches," in Proceedings of the 28th IEEE International Conference on Advanced Information Networking and Applications Workshops (WAINA '14), pp. 410–416, Victoria, Canada, May 2014.

[11] A. Lombardo, A. Manzalini, V. Riccobene, and G. Schembra, "An analytical tool for performance evaluation of software defined networking services," in Proceedings of the IEEE Network Operations and Management Symposium (NOMS '14), pp. 1–7, Krakow, Poland, May 2014.

[12] A. Lombardo, A. Manzalini, G. Schembra, G. Faraci, C. Rametta, and V. Riccobene, "An open framework to enable NetFATE (network functions at the edge)," in Proceedings of the IEEE Workshop on Management Issues in SDN, SDI and NFV (Mission '15), pp. 1–6, London, UK, April 2015.

[13] A. Manzalini, R. Minerva, F. Callegati, W. Cerroni, and A. Campi, "Clouds of virtual machines in edge networks," IEEE Communications Magazine, vol. 51, no. 7, pp. 63–70, 2013.

[14] H. Moens and F. De Turck, "VNF-P: a model for efficient placement of virtualized network functions," in Proceedings of the 10th International Conference on Network and Service Management (CNSM '14), pp. 418–423, Rio de Janeiro, Brazil, November 2014.

[15] K. Katsalis, G. S. Paschos, Y. Viniotis, and L. Tassiulas, "CPU provisioning algorithms for service differentiation in cloud-based environments," IEEE Transactions on Network and Service Management, vol. 12, no. 1, pp. 61–74, 2015.

[16] D. Tuncer, M. Charalambides, G. Pavlou, and N. Wang, "Towards decentralized and adaptive network resource management," in Proceedings of the 7th International Conference on Network and Service Management (CNSM '11), pp. 1–6, Paris, France, October 2011.

[17] D. Tuncer, M. Charalambides, G. Pavlou, and N. Wang, "DACoRM: a coordinated, decentralized and adaptive network resource management scheme," in Proceedings of the IEEE Network Operations and Management Symposium (NOMS '12), pp. 417–425, IEEE, Maui, Hawaii, USA, April 2012.

[18] M. Charalambides, D. Tuncer, L. Mamatas, and G. Pavlou, "Energy-aware adaptive network resource management," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 369–377, Ghent, Belgium, May 2013.

[19] C. Mastroianni, M. Meo, and G. Papuzzo, "Probabilistic consolidation of virtual machines in self-organizing cloud data centers," IEEE Transactions on Cloud Computing, vol. 1, no. 2, pp. 215–228, 2013.

[20] B. Guan, J. Wu, Y. Wang, and S. U. Khan, "CIVSched: a communication-aware inter-VM scheduling technique for decreased network latency between co-located VMs," IEEE Transactions on Cloud Computing, vol. 2, no. 3, pp. 320–332, 2014.

[21] G. Faraci and G. Schembra, "An analytical model to design and manage a green SDN/NFV CPE node," IEEE Transactions on Network and Service Management, vol. 12, no. 3, pp. 435–450, 2015.

[22] L. Kleinrock, "Time-shared systems: a theoretical treatment," Journal of the ACM, vol. 14, no. 2, pp. 242–261, 1967.

[23] S. F. Yashkov and A. S. Yashkova, "Processor sharing: a survey of the mathematical theory," Automation and Remote Control, vol. 68, no. 9, pp. 1662–1731, 2007.

[24] ETSI GS NFV-INF 001 v1.1.1, "Network Functions Virtualization (NFV): Infrastructure Overview," 2015, https://www.etsi.org/deliver/etsi_gs/NFV-INF/001_099/001/01.01.01_60/gs_NFV-INF001v010101p.pdf.

[25] T. N. Subedi, K. K. Nguyen, and M. Cheriet, "OpenFlow-based in-network Layer-2 adaptive multipath aggregation in data centers," Computer Communications, vol. 61, pp. 58–69, 2015.

[26] N. McKeown, T. Anderson, H. Balakrishnan, et al., "OpenFlow: enabling innovation in campus networks," ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, pp. 69–74, 2008.

[27] NARR simulator, v0.1, http://www.diit.unict.it/arti/Tools/NARRSimulator_v01.zip.

[28] L. Kleinrock, Queueing Systems. Volume 1: Theory, Wiley-Interscience, 1st edition, 1975.

[29] R. Bruschi, P. Lago, A. Lombardo, and G. Schembra, "Modeling power management in networked devices," Computer Communications, vol. 50, pp. 95–109, 2014.

[30] W. Fischer and K. S. Meier-Hellstern, "The Markov-Modulated Poisson process (MMPP) cookbook," Performance Evaluation, vol. 18, no. 2, pp. 149–171, 1993.



deployment costs typical of the cloud-based approach, paving the way to the upcoming software-centric evolution of telecommunications [8]. However, a number of challenges must be dealt with, in terms of system integration, data center management, and packet processing performance. For instance, if VLANs are used in the physical switches and in the virtual LANs within the cloud infrastructure, a suitable integration is necessary, and the coexistence of different IP virtual networks dedicated to multiple tenants must be seamlessly guaranteed with proper isolation.

Then a few questions are naturally raised: Will cloud computing platforms be actually capable of satisfying the requirements of complex communication environments, such as the operators' edge networks? Will data centers be able to effectively replace the existing telco infrastructures at the edge? Will virtualized networks provide performance comparable to those achieved with current physical networks, or will they pose significant limitations? Indeed, the answer to these questions will be a function of the cloud management platform considered. In this work the focus is on OpenStack, which is among the state-of-the-art Linux-based virtualization and cloud management tools. Developed by the open-source software community, OpenStack implements the Infrastructure-as-a-Service (IaaS) paradigm in a multitenant context [9].

To the best of our knowledge, not much work has been reported about the actual performance limits of network virtualization in OpenStack cloud infrastructures under the NFV scenario. Some authors assessed the performance of Linux-based virtual switching [10, 11], while others investigated network performance in public cloud services [12]. Solutions for low-latency SDN implementation on high-performance cloud platforms have also been developed [13]. However, none of the above works specifically deals with NFV scenarios on the OpenStack platform. Although some mechanisms for effectively placing virtual network functions within an OpenStack cloud have been presented [14], a detailed analysis of their network performance has not been provided yet.

This paper aims at providing insights on how the OpenStack platform implements multitenant network virtualization, focusing in particular on the performance issues, trying to fill a gap that is starting to get attention also from the OpenStack developer community [15]. The paper objective is to identify performance bottlenecks in the cloud implementation of the NFV paradigms. An ad hoc set of experiments were designed to evaluate the OpenStack performance under critical load conditions, in both single tenant and multitenant scenarios. The results reported in this work extend the preliminary assessment published in [16, 17].

The paper is structured as follows: the network virtualization concept in cloud computing infrastructures is further elaborated in Section 2; the OpenStack virtual network architecture is illustrated in Section 3; the experimental test-bed that we have deployed to assess its performance is presented in Section 4; the results obtained under different scenarios are discussed in Section 5; some conclusions are finally drawn in Section 6.

2. Cloud Network Virtualization

Generally speaking, network virtualization is not a new concept. Virtual LANs, Virtual Private Networks, and Overlay Networks are examples of virtualization techniques already widely used in networking, mostly to achieve isolation of traffic flows and/or of whole network sections, either for security or for functional purposes, such as traffic engineering and performance optimization [2].

Upon considering cloud computing infrastructures, the concept of network virtualization evolves even further. It is not just that some functionalities can be configured in physical devices to obtain some additional functionality in virtual form. In cloud infrastructures, whole parts of the network are virtual, implemented with software devices and/or functions running within the servers. This new "softwarized" network implementation scenario allows novel network control and management paradigms. In particular, the synergies between NFV and SDN offer programmatic capabilities that allow easily defining and flexibly managing multiple virtual network slices at levels not achievable before [1].

In cloud networking the typical scenario is a set of VMs dedicated to a given tenant, able to communicate with each other as if connected to the same Local Area Network (LAN), independently of the physical server/servers they are running on. The VMs and LAN of different tenants have to be isolated and should communicate with the outside world only through layer 3 routing and filtering devices. From such requirements stem two major issues to be addressed in cloud networking: (i) integration of any set of virtual networks defined in the data center physical switches with the specific virtual network technologies adopted by the hosting servers and (ii) isolation among virtual networks that must be logically separated because of being dedicated to different purposes or different customers. Moreover, these problems should be solved with performance optimization in mind, for instance, aiming at keeping VMs with intensive exchange of data colocated in the same server, keeping local traffic inside the host and thus reducing the need for external network resources, and minimizing the communication latency.

The solution to these issues is usually fully supported by the VM manager (i.e., the Hypervisor) running on the hosting servers. Layer 3 routing functions can be executed by taking advantage of lightweight virtualization tools, such as Linux containers or network namespaces, resulting in isolated virtual networks with dedicated network stacks (e.g., IP routing tables and netfilter flow states) [18]. Similarly, layer 2 switching is typically implemented by means of kernel-level virtual bridges/switches interconnecting a VM's virtual interface to a host's physical interface. Moreover, the VM placing algorithms may be designed to take networking issues into account, thus optimizing the networking in the cloud together with computation effectiveness [19]. Finally, it is worth mentioning that whatever network virtualization technology is adopted within a data center, it should be compatible with SDN-based implementation of the control plane (e.g., OpenFlow), for improved manageability and programmability [20].

[Figure 1: Main components of an OpenStack cloud setup.]

For the purposes of this work, the implementation of layer 2 connectivity in the cloud environment is of particular relevance. Many Hypervisors running on Linux systems implement the LANs inside the servers using Linux Bridge, the native kernel bridging module [21]. This solution is straightforward and is natively integrated with the powerful Linux packet filtering and traffic conditioning kernel functions. The overall performance of this solution should be at a reasonable level when the system is not overloaded [22]. The Linux Bridge basically works as a transparent bridge with MAC learning, providing the same functionality as a standard Ethernet switch in terms of packet forwarding. But such standard behavior is not compatible with SDN and is not flexible enough when aspects such as multitenant traffic isolation, transparent VM mobility, and fine-grained forwarding programmability are critical. The Linux-based bridging alternative is Open vSwitch (OVS), a software switching facility specifically designed for virtualized environments and capable of reaching kernel-level performance [23]. OVS is also OpenFlow-enabled and therefore fully compatible and integrated with SDN solutions.
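As a practical aside, both switching facilities can be instantiated with a handful of shell commands; the Python sketch below shows a minimal example (interface names are hypothetical, and the bridge and Open vSwitch userspace tools are assumed to be installed):

    import subprocess

    def sh(cmd):
        subprocess.run(cmd, shell=True, check=True)

    # Linux Bridge: kernel-level transparent bridge with MAC learning.
    sh("ip link add br0 type bridge")
    sh("ip link set tap0 master br0")       # attach a pre-existing tap interface
    sh("ip link set br0 up")

    # Open vSwitch: OpenFlow-enabled software switch.
    sh("ovs-vsctl add-br br-demo")
    sh("ovs-vsctl add-port br-demo tap1")
    sh("ovs-ofctl dump-flows br-demo")      # inspect the installed OpenFlow rules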

3. OpenStack Virtual Network Infrastructure

OpenStack provides cloud managers with a web-based dashboard, as well as a powerful and flexible Application Programmable Interface (API), to control a set of physical hosting servers executing different kinds of Hypervisors (in general, OpenStack is designed to manage a number of computers hosting application servers; these application servers can be executed by fully fledged VMs, lightweight containers, or bare-metal hosts; in this work we focus on the most challenging case of application servers running on VMs) and to manage the required storage facilities and virtual network infrastructures. The OpenStack dashboard also allows instantiating computing and networking resources within the data center infrastructure with a high level of transparency. As illustrated in Figure 1, a typical OpenStack cloud is composed of a number of physical nodes and networks:

(i) Controller node: managing the cloud platform.
(ii) Network node: hosting the networking services for the various tenants of the cloud and providing external connectivity.
(iii) Compute nodes: as many hosts as needed in the cluster to execute the VMs.
(iv) Storage nodes: to store data and VM images.
(v) Management network: the physical networking infrastructure used by the controller node to manage the OpenStack cloud services running on the other nodes.
(vi) Instance/tunnel network (or data network): the physical network infrastructure connecting the network node and the compute nodes, to deploy virtual tenant networks and allow inter-VM traffic exchange and VM connectivity to the cloud networking services running in the network node.
(vii) External network: the physical infrastructure enabling connectivity outside the data center.

OpenStack has a component specifically dedicated to network service management: this component, formerly known as Quantum, was renamed as Neutron in the Havana release. Neutron decouples the network abstractions from the actual implementation and provides administrators and users with a flexible interface for virtual network management. The Neutron server is centralized and typically runs in the controller node. It stores all network-related information and implements the virtual network infrastructure in a distributed and coordinated way. This allows Neutron to transparently manage multitenant networks across multiple compute nodes and to provide transparent VM mobility within the data center.

Neutron's main network abstractions are

(i) network: a virtual layer 2 segment;
(ii) subnet: a layer 3 IP address space used in a network;
(iii) port: an attachment point to a network and to one or more subnets on that network;
(iv) router: a virtual appliance that performs routing between subnets and address translation;
(v) DHCP server: a virtual appliance in charge of IP address distribution;
(vi) security group: a set of filtering rules implementing a cloud-level firewall.

A cloud customer wishing to implement a virtual infrastructure in the cloud is considered an OpenStack tenant and can use the OpenStack dashboard to instantiate computing and networking resources, typically creating a new network and the necessary subnets, optionally spawning the related DHCP servers, then starting as many VM instances as required, based on a given set of available images, and specifying the subnet (or subnets) to which the VM is connected. Neutron takes care of creating a port on each specified subnet (and its underlying network) and of connecting the VM to that port, while the DHCP service on that network (resident in the network node) assigns a fixed IP address to it. Other virtual appliances (e.g., routers providing global connectivity) can be implemented directly in the cloud platform, by means of containers and network namespaces typically defined in the network node. The different tenant networks are isolated by means of VLANs and network namespaces, whereas the security groups protect the VMs from external attacks or unauthorized access. When some VM instances offer services that must be reachable by external users, the cloud provider defines a pool of floating IP addresses on the external network and configures the network node with VM-specific forwarding rules based on those floating addresses.
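For illustration, the same workflow can be scripted with the unified OpenStack command-line client, along the lines of the sketch below (network names, image, flavor, and addresses are hypothetical, and the exact CLI syntax may vary across OpenStack releases):

    import subprocess

    def openstack(args):
        # Thin wrapper around the OpenStack CLI client.
        subprocess.run(["openstack"] + args.split(), check=True)

    openstack("network create net-tenant1")
    openstack("subnet create --network net-tenant1 --subnet-range 10.0.1.0/24 subnet-tenant1")
    openstack("server create --image cirros --flavor m1.small --network net-tenant1 vm1")
    # Expose the VM to external users through a floating IP from the public pool.
    openstack("floating ip create public")
    openstack("server add floating ip vm1 172.24.4.10")   # placeholder address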

OpenStack implements the virtual network infrastructure (VNI) exploiting multiple virtual bridges connecting virtual and/or physical interfaces that may reside in different network namespaces. To better understand such a complex system, a graphical tool was developed to display all the network elements used by OpenStack [24]. Two examples, showing the internal state of a network node connected to three virtual subnets and a compute node running two VMs, are displayed in Figures 2 and 3, respectively.

Each node runs an OVS-based integration bridge, named br-int, and, connected to it, an additional OVS bridge for each data center physical network attached to the node. So the network node (Figure 2) includes br-tun for the instance/tunnel network and br-ex for the external network, whereas a compute node (Figure 3) includes br-tun only.

Layer 2 virtualization and multitenant isolation on the physical network can be implemented using either VLANs or layer-2-in-layer-3/4 tunneling solutions, such as Virtual eXtensible LAN (VXLAN) or Generic Routing Encapsulation (GRE), which allow extending the local virtual networks also to remote data centers [25]. The examples shown in Figures 2 and 3 refer to the case of tenant isolation implemented with GRE tunnels on the instance/tunnel network. Whatever virtualization technology is used in the physical network, its virtual networks must be mapped into the VLANs used internally by Neutron to achieve isolation. This is performed by taking advantage of the programmable features available in OVS, through the insertion of appropriate OpenFlow mapping rules in br-int and br-tun.

Virtual bridges are interconnected by means of either virtual Ethernet (veth) pairs or patch port pairs, consisting of two virtual interfaces that act as the endpoints of a pipe: anything entering one endpoint always comes out on the other side.

From the networking point of view, the creation of a new VM instance involves the following steps:

(i) The OpenStack scheduler component running in the controller node chooses the compute node that will host the VM.
(ii) A tap interface is created for each VM network interface to connect it to the Linux kernel.
(iii) A Linux Bridge dedicated to each VM network interface is created (in Figure 3 two of them are shown), and the corresponding tap interface is attached to it.
(iv) A veth pair connecting the new Linux Bridge to the integration bridge is created.

The veth pair clearly emulates the Ethernet cable that would connect the two bridges in real life. Nonetheless, why the new Linux Bridge is needed is not intuitive, as the VM's tap interface could be directly attached to br-int. In short, the reason is that the antispoofing rules currently implemented by Neutron adopt the native Linux kernel filtering functions (netfilter) applied to bridged tap interfaces, which work only under Linux Bridges. Therefore the Linux Bridge is required as an intermediate element to interconnect the VM to the integration bridge. The security rules are applied to the Linux Bridge on the tap interface that connects the kernel-level bridge to the virtual Ethernet port of the VM running in user-space.
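The per-VM chain just described (tap interface, dedicated Linux Bridge, veth pair, integration bridge) can be reproduced by hand, as in the sketch below; the names mimic Neutron's qbr/qvb/qvo convention but are hypothetical:

    import subprocess

    def sh(cmd):
        subprocess.run(cmd, shell=True, check=True)

    sh("ip link add qbr-demo type bridge")                   # per-VM Linux Bridge
    sh("ip link add qvb-demo type veth peer name qvo-demo")  # veth pair (the pipe)
    sh("ip link set qvb-demo master qbr-demo")               # bridge side of the pipe
    sh("ovs-vsctl add-port br-int qvo-demo")                 # OVS side of the pipe
    sh("ip link set tap-demo master qbr-demo")               # VM tap (assumed to exist)
    for dev in ("qbr-demo", "qvb-demo", "qvo-demo"):
        sh(f"ip link set {dev} up")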

4. Experimental Setup

The previous section makes the complexity of the OpenStack virtual network infrastructure clear. To understand optimal design strategies in terms of network performance, it is of great importance to analyze it under critical traffic conditions and assess the maximum sustainable packet rate under different application scenarios. The goal is to isolate as much as possible the level of performance of the main OpenStack network components and determine where the bottlenecks are located, speculating on possible improvements. To this purpose, a test-bed including a controller node, one or two compute nodes (depending on the specific experiment), and a network node was deployed and used to obtain the results presented in the following. In the test-bed, each compute node runs KVM, the native Linux VM Hypervisor, and is equipped

[Figure 2: Network elements in an OpenStack network node connected to three virtual subnets. Three OVS bridges (red boxes) are interconnected by patch port pairs (orange boxes). br-ex is directly attached to the external network physical interface (eth0), whereas a GRE tunnel is established on the instance/tunnel network physical interface (eth1) to connect br-tun with its counterpart in the compute node. A number of br-int ports (light-green boxes) are connected to four virtual router interfaces and three DHCP servers. An additional physical interface (eth2) connects the network node to the management network.]

with 8 GB of RAM and a quad-core processor with hyperthreading enabled, resulting in 8 virtual CPUs.

The test-bed was configured to implement three possible use cases:

(1) a typical single tenant cloud computing scenario;
(2) a multitenant NFV scenario with dedicated network functions;
(3) a multitenant NFV scenario with shared network functions.

For each use case, multiple experiments were executed, as reported in the following. In the various experiments, typically a traffic source sends packets at increasing rate to a destination that measures the received packet rate and throughput. To this purpose, the RUDE & CRUDE tool was used, for both traffic generation and measurement [26].

In some cases the iperf3 tool was also added, to generate background traffic at a fixed data rate [27]. All physical interfaces involved in the experiments were Gigabit Ethernet network cards.
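As a rough Python stand-in for the RUDE & CRUDE pair, a fixed-rate UDP source and a packet counter can be sketched as follows (address, port, and rates are placeholders; a real measurement tool needs a more precise pacing loop):

    import socket, time

    def send_udp(dst=("192.168.1.10", 5005), rate_pps=10000, ip_pkt_len=1500, duration=10):
        # Payload size excludes the 20-byte IP and 8-byte UDP headers.
        payload = b"x" * (ip_pkt_len - 28)
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        interval, sent = 1.0 / rate_pps, 0
        t_next, t_end = time.time(), time.time() + duration
        while time.time() < t_end:
            sock.sendto(payload, dst)
            sent += 1
            t_next += interval
            time.sleep(max(0.0, t_next - time.time()))
        return sent

    def count_udp(port=5005, duration=10):
        # Count the datagrams received on a UDP port over a time window.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", port))
        sock.settimeout(0.5)
        received, t_end = 0, time.time() + duration
        while time.time() < t_end:
            try:
                sock.recvfrom(2048)
                received += 1
            except socket.timeout:
                pass
        return received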

4.1. Single Tenant Cloud Computing Scenario. This is the typical configuration where a single tenant runs one or multiple VMs that exchange traffic with one another in the cloud or with an external host, as shown in Figure 4. This is a rather trivial case of limited general interest, but it is useful to assess some basic concepts and pave the way to the deeper analysis developed in the second part of this section. In the experiments reported, as mentioned above, the virtualization Hypervisor was always KVM. A scenario with OpenStack running the cloud environment and a scenario without OpenStack were considered, to assess some general

[Figure 3: Network elements in an OpenStack compute node running two VMs. Two Linux Bridges (blue boxes) are attached to the VM tap interfaces (green boxes) and connected by virtual Ethernet pairs (light-blue boxes) to br-int.]

comparison and allow a first isolation of the performance degradation due to the individual building blocks, in particular Linux Bridge and OVS. The experiments report the following cases:

(1) OpenStack scenario: it adopts the standard OpenStack cloud platform, as described in the previous section, with two VMs respectively acting as sender and receiver. In particular, the following setups were tested:
(1.1) a single compute node executing two colocated VMs;
(1.2) two distinct compute nodes, each executing a VM.

(2) Non-OpenStack scenario: it adopts physical hosts running Linux-Ubuntu server and KVM Hypervisor, using either OVS or Linux Bridge as a virtual switch. The following setups were tested:
(2.1) one physical host executing two colocated VMs, acting as sender and receiver, directly connected to the same Linux Bridge;
(2.2) the same setup as the previous one, but with an OVS bridge instead of a Linux Bridge;
(2.3) two physical hosts: one executing the sender VM connected to an internal OVS, and the other natively acting as the receiver.

[Figure 4: Reference logical architecture of a single tenant virtual infrastructure with 5 hosts: 4 hosts are implemented as VMs in the cloud and are interconnected via the OpenStack layer 2 virtual infrastructure; the 5th host is implemented by a physical machine placed outside the cloud but still connected to the same logical LAN.]

[Figure 5: Multitenant NFV scenario with dedicated network functions, tested on the OpenStack platform.]

4.2. Multitenant NFV Scenario with Dedicated Network Functions. The multitenant scenario we want to analyze is inspired by a simple NFV case study, as illustrated in Figure 5: each tenant's service chain consists of a customer-controlled VM, followed by a dedicated deep packet inspection (DPI) virtual appliance and a conventional gateway (router) connecting the customer LAN to the public Internet. The DPI is deployed by the service operator as a separate VM with two network interfaces, running a traffic monitoring application based on the nDPI library [28]. It is assumed that the DPI analyzes the traffic profile of the customers (source and destination IP addresses and ports, application protocol, etc.) to guarantee the matching with the customer service level agreement (SLA), a practice that is rather common among Internet service providers to enforce network security and traffic policing. The virtualization approach, executing the DPI in a VM, makes it possible to easily configure and adapt the inspection function to the specific tenant characteristics. For this reason, every tenant has its own DPI with dedicated configuration. On the other hand, the gateway has to implement a standard functionality and is shared among customers. It is implemented as a virtual router for packet forwarding and NAT operations.

The implementation of the test scenarios has been done following the OpenStack architecture. The compute nodes of the cluster run the VMs, while the network node runs the virtual router within a dedicated network namespace. All layer 2 connections are implemented by a virtual switch (with proper VLAN isolation) distributed in both the compute and network nodes. Figure 6 shows the view provided by the OpenStack dashboard in the case of 4 tenants simultaneously active, which is the one considered for the numerical results presented in the following. The choice of 4 tenants was made to provide meaningful results with an acceptable degree of complexity, without lack of generality. As results show, this is enough to put the hardware resources of the compute node under stress and therefore evaluate performance limits and critical issues.

It is very important to outline that the VM setup shown in Figure 5 is not commonly seen in a traditional cloud computing environment. The VMs usually behave as single hosts connected as endpoints to one or more virtual networks, with one single network interface and no pass-through forwarding duties. In NFV, the virtual network functions (VNFs) often perform actions that require packet forwarding. Network Address Translators (NATs), Deep Packet Inspectors (DPIs), and so forth all belong to this category. If such VNFs are hosted in VMs, the result is that VMs in the OpenStack infrastructure must be allowed to perform packet forwarding, which goes against the typical rules implemented for security reasons in OpenStack. For instance, when a new VM is instantiated, it is attached to a Linux Bridge to which filtering rules are applied, with the goal of avoiding that the VM sends packets with MAC and IP addresses that are not the ones allocated to the VM itself. Clearly, this is an antispoofing rule that makes perfect sense in a normal networking environment, but it impairs the forwarding of packets originated by another VM, as is the case of the NFV scenario. In the scenario considered here, it was therefore necessary to permanently modify the filtering rules in the Linux Bridges, by allowing, within each tenant slice, packets coming from or directed to the customer VM's IP address to pass through the Linux Bridges attached to the DPI virtual appliance. Similarly, the virtual router is usually connected just to one LAN, and therefore its NAT function is configured for a single pool of addresses. This was also modified and adapted to serve the whole set of internal networks used in the multitenant setup.
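The kind of exception that had to be inserted can be sketched as follows; the address is a hypothetical placeholder and the rule is put at the top of the generic FORWARD chain, whereas the exact chains built by Neutron depend on the release:

    import subprocess

    CUSTOMER_VM_IP = "10.0.3.5"   # hypothetical customer VM address in one slice

    # Let packets from/to the customer VM traverse the bridges towards the
    # DPI appliance, ahead of the default antispoofing drop rules.
    for flag in ("-s", "-d"):
        subprocess.run(
            ["iptables", "-I", "FORWARD", "1", flag, CUSTOMER_VM_IP, "-j", "ACCEPT"],
            check=True,
        )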

4.3. Multitenant NFV Scenario with Shared Network Functions. We finally extend our analysis to a set of multitenant scenarios assuming different levels of shared VNFs, as illustrated in Figure 7. We start with a single VNF, that is, the virtual router connecting all tenants to the external network (Figure 7(a)). Then we progressively add a shared DPI (Figure 7(b)), a shared firewall/NAT function (Figure 7(c)), and a shared traffic shaper (Figure 7(d)). The rationale behind this last group of setups is to evaluate how NFV deployment on top of an OpenStack compute node performs under a realistic multitenant scenario, where traffic flows must be processed by a chain of multiple VNFs. The complexity of the

[Figure 6: The OpenStack dashboard shows the tenants' virtual networks (slices). Each slice includes a VM connected to an internal network (InVMnet_i) and a second VM performing DPI and packet forwarding between InVMnet_i and DPInet_i. Connectivity with the public Internet is provided for all by the virtual router in the bottom-left corner.]

virtual network path inside the compute node for the VNF chaining of Figure 7(d) is displayed in Figure 8. The peculiar nature of NFV traffic flows is clearly shown in the figure, where packets are being forwarded multiple times across br-int as they enter and exit the multiple VNFs running in the compute node.

5. Numerical Results

5.1. Benchmark Performance. Before presenting and discussing the performance of the study scenarios described above, it is important to set some benchmark as a reference for comparison. This was done by considering a back-to-back (B2B) connection between two physical hosts with the same hardware configuration used in the cluster of the cloud platform.

The former host acts as traffic generator, while the latter acts as traffic sink. The aim is to verify and assess the maximum throughput and sustainable packet rate of the hardware platform used for the experiments. Packet flows ranging from 10^3 to 10^5 packets per second (pps), for both 64- and 1500-byte IP packet sizes, were generated.

For 1500-byte packets, the throughput saturates at about 970 Mbps at 80 Kpps. Given that the measurement does not consider the Ethernet overhead, this limit is clearly very close to 1 Gbps, which is the physical limit of the Ethernet interface. For 64-byte packets the results are different, since the maximum measured throughput is about 150 Mbps. Therefore, the limiting factor is not the Ethernet bandwidth

[Figure 7: Multitenant NFV scenario with shared network functions, tested on the OpenStack platform: (a) single VNF; (b) two VNFs' chaining; (c) three VNFs' chaining; (d) four VNFs' chaining.]

but the maximum sustainable packet processing rate of the compute node. These results are shown in Figure 9.
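A quick sanity check of the 1500-byte saturation point supports this reading (the 38 bytes of per-frame Ethernet overhead, preamble and interframe gap included, are our own addition to the computation):

\[
\begin{aligned}
80\,\mathrm{Kpps} \times 1500\,\mathrm{B} \times 8\,\mathrm{bit/B} &= 960\,\mathrm{Mbps} \quad \text{(IP level)},\\
80\,\mathrm{Kpps} \times (1500+38)\,\mathrm{B} \times 8\,\mathrm{bit/B} &\approx 984\,\mathrm{Mbps} \approx 1\,\mathrm{Gbps} \quad \text{(wire level)}.
\end{aligned}
\]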

This latter limitation, related to the processing capabilities of the hosts, is not very relevant to the scopes of this work. Indeed, it is always possible in a real operation environment to deploy more powerful and better dimensioned hardware. This was not possible in this set of experiments, where the cloud cluster was an existing research infrastructure which could not be modified at will. Nonetheless, the objective here is to understand the limitations that emerge as a consequence of the networking architecture resulting from the deployment of the VNFs in the cloud, and not of the specific hardware configuration. For these reasons, as well as for the sake of brevity, the numerical results presented in the following mostly focus on the case of 1500-byte packet length, which will stress the network more than the hosts in terms of performance.

5.2. Single Tenant Cloud Computing Scenario. The first series of results is related to the single tenant scenario described in Section 4.1. Figure 10 shows the comparison of OpenStack setups (1.1) and (1.2) with the B2B case. The figure shows that the different networking configurations play a crucial role in performance. Setup (1.1), with the two VMs colocated in the same compute node, clearly is more demanding, since the compute node has to process the workload of all the components shown in Figure 3, that is, packet generation and reception in two VMs and layer 2 switching in two Linux Bridges and two OVS bridges (as a matter of fact, the packets are both outgoing and incoming at the same time within the same physical machine). The performance starts deviating from the B2B case at around 20 Kpps, with a saturating effect starting at 30 Kpps. This is the maximum packet processing capability of the compute node, regardless of the physical networking capacity, which is not fully exploited in this particular scenario where the traffic flow does not leave the physical host. Setup (1.2) splits the workload over two physical machines, and the benefit is evident: the performance is almost ideal, with a very little penalty due to the virtualization overhead.

These very simple experiments lead to an important conclusion that motivates the more complex experiments

[Figure 8: A view of the OpenStack compute node with the tenant VM and the VNFs installed, including the building blocks of the virtual network infrastructure. The red dashed line shows the path followed by the packets traversing the VNF chain displayed in Figure 7(d).]

[Figure 9: Throughput versus generated packet rate in the B2B setup, for 64- and 1500-byte packets. Comparison with ideal 1500-byte packet throughput.]

that follow: the standard OpenStack virtual network implementation can show significant performance limitations. For this reason, the first objective was to investigate where the possible bottleneck is, by evaluating the performance of the virtual network components in isolation. This cannot be done with OpenStack in action; therefore, ad hoc virtual networking scenarios were implemented, deploying just parts

[Figure 10: Received versus generated packet rate in the OpenStack scenario, setups (1.1) and (1.2), with 1500-byte packets.]

of the typical OpenStack infrastructure. These are called Non-OpenStack scenarios in the following.

Setups (2.1) and (2.2) compare Linux Bridge, OVS, and B2B, as shown in Figure 11. The graphs show interesting and important results that can be summarized as follows:

[Figure 11: Received versus generated packet rate in the Non-OpenStack scenario, setups (2.1) and (2.2), with 1500-byte packets.]

(i) The introduction of some virtual network component (thus introducing the processing load of the physical hosts in the equation) is always a cause of performance degradation, but with very different degrees of magnitude depending on the virtual network component.

(ii) OVS introduces a rather limited performance degradation, at very high packet rate, with a loss of some percent.

(iii) Linux Bridge introduces a significant performance degradation, starting well before the OVS case and leading to a loss in throughput as high as 50%.

The conclusion of these experiments is that the presence of additional Linux Bridges in the compute nodes is one of the main reasons for the OpenStack performance degradation. Results obtained from testing setup (2.3) are displayed in Figure 12, confirming that with OVS it is possible to reach performance comparable with the baseline.

5.3. Multitenant NFV Scenario with Dedicated Network Functions. The second series of experiments was performed with reference to the multitenant NFV scenario with dedicated network functions described in Section 4.2. The case study considers that different numbers of tenants are hosted in the same compute node, sending data to a destination outside the LAN, therefore beyond the virtual gateway. Figure 13 shows the packet rate actually received at the destination for each tenant, for different numbers of simultaneously active tenants, with 1500-byte IP packet size. In all cases the tenants generate the same amount of traffic, resulting in as many overlapping curves as the number of active tenants. All curves grow linearly as long as the generated traffic is sustainable, and then they saturate. The saturation is caused by the physical bandwidth limit imposed by the Gigabit Ethernet interfaces involved in the data transfer. In fact, the curves become flat as soon as the packet rate reaches about 80 Kpps for 1 tenant, about 40 Kpps for 2 tenants, about 27 Kpps for 3 tenants, and

[Figure 12: Received versus generated packet rate in the Non-OpenStack scenario, setup (2.3), with 1500-byte packets.]

[Figure 13: Received versus generated packet rate for each tenant (T1, T2, T3, and T4), for different numbers of active tenants, with 1500-byte IP packet size.]

about 20 Kpps for 4 tenants, that is, when the total packet rate is slightly more than 80 Kpps, corresponding to 1 Gbps.
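These saturation points are consistent with an even split of the single-link packet budget among the active tenants:

\[
R_{\mathrm{sat}}(N) \approx \frac{80\,\mathrm{Kpps}}{N} = 80,\ 40,\ 26.7,\ 20\,\mathrm{Kpps} \quad \text{for } N = 1, 2, 3, 4.
\]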

In this case it is worth investigating what happens for small packets, therefore putting more pressure on the processing capabilities of the compute node. Figure 14 reports the 64-byte packet size case. As discussed previously, in this case the performance saturation is not caused by the physical bandwidth limit, but by the inability of the hardware platform to cope with the packet processing workload (in fact, the single compute node has to process the workload of all the components involved, including packet generation and DPI in the VMs of each tenant, as well as layer 2 packet

[Figure 14: Received versus generated packet rate for each tenant (T1, T2, T3, and T4), for different numbers of active tenants, with 64-byte IP packet size.]

processing and switching in three Linux Bridges per tenant and two OVS bridges). As could be easily expected from the results presented in Figure 9, the virtual network is not able to use the whole physical capacity. Even in the case of just one tenant, a total bit rate of about 77 Mbps, well below 1 Gbps, is measured. Moreover, this penalty increases with the number of tenants (i.e., with the complexity of the virtual system). With two tenants the curve saturates at a total of approximately 150 Kpps (75 × 2), with three tenants at a total of approximately 135 Kpps (45 × 3), and with four tenants at a total of approximately 120 Kpps (30 × 4). This is to say that an increase of one unit in the number of tenants results in a decrease of about 10% in the usable overall network capacity and in a similar penalty per tenant.

Given the results of the previous section, it is likely that the Linux Bridges are responsible for most of this performance degradation. In Figure 15 a comparison is presented between the total throughput obtained under normal OpenStack operations and the corresponding total throughput measured in a custom configuration where the Linux Bridges attached to each VM are bypassed. To implement the latter scenario, the OpenStack virtual network configuration running in the compute node was modified by connecting each VM's tap interface directly to the OVS integration bridge. The curves show that the presence of Linux Bridges in normal OpenStack mode is indeed causing performance degradation, especially when the workload is high (i.e., with 4 tenants). It is interesting to note also that the penalty related to the number of tenants is mitigated by the bypass, but not fully solved.
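A rough sketch of this bypass operation is reported below; the interface names are hypothetical placeholders for the Neutron-generated ones, and in a real deployment the OVS port attributes (e.g., the internal VLAN tag) must be migrated as well:

    import subprocess

    def sh(cmd):
        subprocess.run(cmd, shell=True, check=True)

    TAP, QVB, QVO, QBR = "tap-demo", "qvb-demo", "qvo-demo", "qbr-demo"  # placeholders

    sh(f"ip link set {TAP} nomaster")        # detach the tap from its Linux Bridge
    sh(f"ovs-vsctl del-port br-int {QVO}")   # drop the veth-based OVS port
    sh(f"ovs-vsctl add-port br-int {TAP}")   # plug the tap straight into br-int
    sh(f"ip link delete {QVB}")              # the veth pair and the Linux Bridge
    sh(f"ip link delete {QBR}")              # are no longer needed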

5.4. Multitenant NFV Scenario with Shared Network Functions. The third series of experiments was performed with

[Figure 15: Total throughput measured versus total packet rate generated by 2 to 4 tenants, for 64-byte packet size. Comparison between normal OpenStack mode and Linux Bridge bypass with 3 and 4 tenants.]

[Figure 16: Received versus generated packet rate for one tenant (T1) when four tenants are active, with 1500-byte IP packet size and different levels of VNF chaining as per Figure 7. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.]

reference to the multitenant NFV scenario with shared network functions described in Section 4.3. In each experiment, four tenants are equally generating increasing amounts of traffic, ranging from 1 to 100 Kpps. Figures 16 and 17 show the packet rate actually received at the destination from tenant T1, as a function of the packet rate generated by T1, for different levels of VNF chaining, with 1500- and 64-byte IP packet size, respectively. The measurements demonstrate that, for the 1500-byte case, adding a single shared VNF (even one that executes heavy packet processing, such as the DPI)

[Figure 17: Received versus generated packet rate for one tenant (T1) when four tenants are active, with 64-byte IP packet size and different levels of VNF chaining as per Figure 7. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.]

does not significantly impact the forwarding performance of the OpenStack compute node for a packet rate below 50 Kpps (note that the physical capacity is saturated by the flows simultaneously generated from four tenants at around 20 Kpps, similarly to what happens in the dedicated VNF case of Figure 13). Then the throughput slowly degrades. In contrast, when 64-byte packets are generated, even a single VNF can cause heavy performance losses above 25 Kpps, when the packet rate reaches the sustainability limit of the forwarding capacity of our compute node. Independently of the packet size, adding another VNF with heavy packet processing (the firewall/NAT is configured with 40,000 matching rules) causes the performance to rapidly degrade. This is confirmed when a fourth VNF is added to the chain, although for the 1500-byte case the measured packet rate is the one that saturates the maximum bandwidth made available by the traffic shaper. Very similar performance, which we do not show here, was measured also for the other three tenants.

To further investigate the effect of VNF chaining, we considered the case when traffic generated by tenant T1 is not subject to VNF chaining (as in Figure 7(a)), whereas flows originated from T2, T3, and T4 are processed by four VNFs (as in Figure 7(d)). The results presented in Figures 18 and 19 demonstrate that, owing to the traffic shaping function applied to the other tenants, the throughput of T1 can reach values not very far from the case when it is the only active tenant, especially for packet rates below 35 Kpps. Therefore, a smart choice of the VNF chaining and a careful planning of the cloud platform resources could improve the performance of a given class of priority customers. In the same situation we measured the TCP throughput achievable by the four tenants. As shown in Figure 20, we can reach the same conclusions as in the UDP case.

[Figure 18: Received throughput versus generated packet rate for each tenant (T1, T2, T3, and T4) when T1 does not traverse the VNF chain of Figure 7(d), with 1500-byte IP packet size. Comparison with the single tenant case. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.]


Figure 19: Received throughput versus generated packet rate for each tenant (T1, T2, T3, and T4) when T1 does not traverse the VNF chain of Figure 7(d), with 64-byte IP packet size. Comparison with the single tenant case. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

6 Conclusion

Network Function Virtualization will completely reshape the approach of telco operators to provide existing as well as novel network services, taking advantage of the increased


Figure 20: Received TCP throughput for each tenant (T1, T2, T3, and T4) when T1 does not traverse the VNF chain of Figure 7(d). Comparison with the single tenant case. DPI: deep packet inspection; FW: firewall/NAT; TS: traffic shaper; VR: virtual router; DEST: destination.

flexibility and reduced deployment costs of the cloud computing paradigm. In this work, the problem of evaluating complexity and performance, in terms of sustainable packet rate, of virtual networking in cloud computing infrastructures dedicated to NFV deployment was addressed. An OpenStack-based cloud platform was considered and deeply analyzed to fully understand the architecture of its virtual network infrastructure. To this end, an ad hoc visual tool was also developed that graphically plots the different functional blocks (and related interconnections) put in place by Neutron, the OpenStack networking service. Some examples were provided in the paper.

The analysis brought the focus of the performance investigation on the two basic software switching elements natively adopted by OpenStack, namely, Linux Bridge and Open vSwitch. Their performance was first analyzed in a single tenant cloud computing scenario, by running experiments on a standard OpenStack setup as well as in ad hoc stand-alone configurations built with the specific purpose of observing them in isolation. The results prove that the Linux Bridge is the critical bottleneck of the architecture, while Open vSwitch shows an almost optimal behavior.

The analysis was then extended to more complex scenarios, assuming a data center hosting multiple tenants deploying NFV environments. The case studies considered first a simple dedicated deep packet inspection function, followed by conventional address translation and routing, and then a more realistic virtual network function chaining shared among a set of customers with increased levels of complexity. Results about sustainable packet rate and throughput performance of the virtual network infrastructure were presented and discussed.

The main outcome of this work is that an open-source cloud computing platform such as OpenStack can be effectively adopted to deploy NFV in network edge data centers replacing legacy telco central offices. However, this solution poses some limitations to the network performance, which are not simply related to the hosting hardware maximum capacity but also to the virtual network architecture implemented by OpenStack. Nevertheless, our study demonstrates that some of these limitations can be mitigated with a careful redesign of the virtual network infrastructure and an optimal planning of the virtual network functions. In any case, such limitations must be carefully taken into account for any engineering activity in the virtual networking arena.

Obviously, scaling up the system and distributing the virtual network functions among several compute nodes will definitely improve the overall performance. However, in this case the role of the physical network infrastructure becomes critical, and an accurate analysis is required in order to isolate the contributions of virtual and physical components. We plan to extend our study in this direction in our future work, after properly upgrading our experimental test-bed.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

This work was partially funded by EIT ICT Labs, Action Line on Future Networking Solutions, Activity no. 15270/2015 "SDN at the Edges". The authors would like to thank Mr. Giuliano Santandrea for his contributions to the experimental setup.

References

[1] B. Han, V. Gopalakrishnan, L. Ji, and S. Lee, "Network function virtualization: challenges and opportunities for innovations," IEEE Communications Magazine, vol. 53, no. 2, pp. 90–97, 2015.

[2] N. M. Mosharaf Kabir Chowdhury and R. Boutaba, "A survey of network virtualization," Computer Networks, vol. 54, no. 5, pp. 862–876, 2010.

[3] The Open Networking Foundation, Software-Defined Networking: The New Norm for Networks, ONF White Paper, The Open Networking Foundation, 2012.

[4] The European Telecommunications Standards Institute, "Network functions virtualisation (NFV): architectural framework," ETSI GS NFV 002 V1.2.1, The European Telecommunications Standards Institute, 2014.

[5] A. Manzalini, R. Minerva, F. Callegati, W. Cerroni, and A. Campi, "Clouds of virtual machines in edge networks," IEEE Communications Magazine, vol. 51, no. 7, pp. 63–70, 2013.

[6] J. Soares, C. Goncalves, B. Parreira et al., "Toward a telco cloud environment for service functions," IEEE Communications Magazine, vol. 53, no. 2, pp. 98–106, 2015.

[7] Open Networking Lab, Central Office Re-Architected as Datacenter (CORD), ON.Lab White Paper, Open Networking Lab, 2015.

[8] K. Pretz, "Software already defines our lives—but the impact of SDN will go beyond networking alone," IEEE The Institute, vol. 38, no. 4, p. 8, 2014.

[9] OpenStack Project, http://www.openstack.org.

[10] F. Sans and E. Gamess, "Analytical performance evaluation of different switch solutions," Journal of Computer Networks and Communications, vol. 2013, Article ID 953797, 11 pages, 2013.

[11] P. Emmerich, D. Raumer, F. Wohlfart, and G. Carle, "Performance characteristics of virtual switching," in Proceedings of the 3rd International Conference on Cloud Networking (CloudNet '13), pp. 120–125, IEEE, Luxembourg City, Luxembourg, October 2014.

[12] R. Shea, F. Wang, H. Wang, and J. Liu, "A deep investigation into network performance in virtual machine based cloud environments," in Proceedings of the 33rd IEEE Conference on Computer Communications (INFOCOM '14), pp. 1285–1293, IEEE, Ontario, Canada, May 2014.

[13] P. Rad, R. V. Boppana, P. Lama, G. Berman, and M. Jamshidi, "Low-latency software defined network for high performance clouds," in Proceedings of the 10th System of Systems Engineering Conference (SoSE '15), pp. 486–491, San Antonio, Tex, USA, May 2015.

[14] S. Oechsner and A. Ripke, "Flexible support of VNF placement functions in OpenStack," in Proceedings of the 1st IEEE Conference on Network Softwarization (NETSOFT '15), pp. 1–6, London, UK, April 2015.

[15] G. Almasi, M. Banikazemi, B. Karacali, M. Silva, and J. Tracey, "Openstack networking: it's time to talk performance," in Proceedings of the OpenStack Summit, Vancouver, Canada, May 2015.

[16] F. Callegati, W. Cerroni, C. Contoli, and G. Santandrea, "Performance of network virtualization in cloud computing infrastructures: the Openstack case," in Proceedings of the 3rd IEEE International Conference on Cloud Networking (CloudNet '14), pp. 132–137, Luxemburg City, Luxemburg, October 2014.

[17] F. Callegati, W. Cerroni, C. Contoli, and G. Santandrea, "Performance of multi-tenant virtual networks in OpenStack-based cloud infrastructures," in Proceedings of the 2nd IEEE Workshop on Cloud Computing Systems, Networks, and Applications (CCSNA '14), in Conjunction with IEEE Globecom 2014, pp. 81–85, Austin, Tex, USA, December 2014.

[18] N. Handigol, B. Heller, V. Jeyakumar, B. Lantz, and N. McKeown, "Reproducible network experiments using container-based emulation," in Proceedings of the 8th International Conference on Emerging Networking Experiments and Technologies (CoNEXT '12), pp. 253–264, ACM, December 2012.

[19] P. Bellavista, F. Callegati, W. Cerroni et al., "Virtual network function embedding in real cloud environments," Computer Networks, vol. 93, part 3, pp. 506–517, 2015.

[20] M. F. Bari, R. Boutaba, R. Esteves et al., "Data center network virtualization: a survey," IEEE Communications Surveys & Tutorials, vol. 15, no. 2, pp. 909–928, 2013.

[21] The Linux Foundation, Linux Bridge, The Linux Foundation, 2009, http://www.linuxfoundation.org/collaborate/workgroups/networking/bridge.

[22] J. T. Yu, "Performance evaluation of Linux bridge," in Proceedings of the Telecommunications System Management Conference, Louisville, Ky, USA, April 2004.

[23] B. Pfaff, J. Pettit, T. Koponen, K. Amidon, M. Casado, and S. Shenker, "Extending networking into the virtualization layer," in Proceedings of the 8th ACM Workshop on Hot Topics in Networks (HotNets '09), New York, NY, USA, October 2009.

[24] G. Santandrea, Show My Network State, 2014, https://sites.google.com/site/showmynetworkstate.

[25] R. Jain and S. Paul, "Network virtualization and software defined networking for cloud computing: a survey," IEEE Communications Magazine, vol. 51, no. 11, pp. 24–31, 2013.

[26] "RUDE & CRUDE: Real-Time UDP Data Emitter & Collector for RUDE," http://sourceforge.net/projects/rude.

[27] iperf3: a TCP, UDP, and SCTP network bandwidth measurement tool, https://github.com/esnet/iperf.

[28] nDPI: Open and Extensible LGPLv3 Deep Packet Inspection Library, http://www.ntop.org/products/ndpi.

Research Article
Server Resource Dimensioning and Routing of Service Function Chain in NFV Network Architectures

V. Eramo,1 A. Tosti,2 and E. Miucci1

1DIET, "Sapienza" University of Rome, Via Eudossiana 18, 00184 Rome, Italy
2Telecom Italia, Via di Val Cannuta 250, 00166 Roma, Italy

Correspondence should be addressed to E. Miucci; 1kronos1@gmail.com

Received 29 September 2015; Accepted 18 February 2016

Academic Editor: Vinod Sharma

Copyright © 2016 V. Eramo et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The Network Function Virtualization (NFV) technology aims at virtualizing the network service with the execution of the single service components in Virtual Machines activated on Commercial-off-the-shelf (COTS) servers. Any service is represented by the Service Function Chain (SFC), that is, a set of VNFs to be executed according to a given order. The running of VNFs needs the instantiation of VNF instances (VNFIs) that, in general, are software components executed on Virtual Machines. In this paper we cope with the routing and resource dimensioning problem in NFV architectures. We formulate the optimization problem and, due to its NP-hard complexity, heuristics are proposed for both cases of offline and online traffic demand. We show how the heuristics work correctly by guaranteeing a uniform occupancy of the server processing capacity and the network link bandwidth. A consolidation algorithm for the power consumption minimization is also proposed. The application of the consolidation algorithm allows for a high power consumption saving that, however, is to be paid with an increase in SFC blocking probability.

1 Introduction

Today's networks are overly complex, partly due to an increasing variety of proprietary, fixed-function appliances that are unable to deliver the agility and economics needed to address constantly changing market requirements [1]. This is because network elements have traditionally been optimized for high packet throughput at the expense of flexibility, thus hampering the deployment of new services [2]. Network Function Virtualization (NFV) can provide the infrastructure flexibility and agility needed to successfully compete in today's evolving communications landscape [3]. NFV implements network functions in software running on a pool of shared commodity servers instead of using dedicated proprietary hardware. This virtualized approach decouples the network hardware from the network function and results in increased infrastructure flexibility and reduced hardware costs. Because the infrastructure is simplified and streamlined, new and expanded services can be created quickly and with less expense. Implementation of the paradigm has also been proposed [4] and the performance has been investigated [5]. To support the NFV technology, both ETSI [6, 7] and

IETF [8, 9] are defining novel network architectures able to allocate resources for Virtualized Network Functions (VNFs) as well as manage and orchestrate NFV to support services. In particular, the service is represented by a Service Function Chain (SFC) [8], that is, a set of VNFs that have to be executed according to a given order. Any VNF is run on a VNF instance (VNFI), implemented with one Virtual Machine (VM) whose resources (Vcores, RAM memory, etc.) are allocated to it [10]. Some solutions have been proposed in the literature to solve the problem of choosing the servers where to instantiate VNFs and to determine the network paths interconnecting the VNFs [1]. A formulation of the optimization problem is illustrated in [11]. Three greedy algorithms and a tabu search-based heuristic are proposed in [12]. The extension of the problem to the case in which virtual routers are also considered is proposed in [13].

The innovative contribution of our paper is that, differently from [12], we follow an approach that allows for both the resource dimensioning and the SFC routing. We formulate the optimization problem whose objective is the minimization of the number of dropped SFC requests, with the constraint that the SFCs are routed so as to respect both

Hindawi Publishing Corporation, Journal of Electrical and Computer Engineering, Volume 2016, Article ID 7139852, 12 pages, http://dx.doi.org/10.1155/2016/7139852


the server processing capacity and network link bandwidth. With the problem being NP-hard, we introduce a heuristic that allows for (i) the dimensioning of the Virtual Machines in terms of number of Vcores assigned and (ii) the SFC routing through the VNF instances implemented by the Virtual Machines. The heuristic performs simultaneously the dimensioning and routing operations. Furthermore, we propose a consolidation algorithm based on Virtual Machine migrations and able to achieve power consumption savings. The proposed algorithms are evaluated in scenarios characterized by offline and online traffic demands.

The paper is organized as follows. The related work is discussed in Section 2. Section 3 is devoted to illustrating the optimization problem and the proposed heuristic for the offline traffic demand case. In Section 4 we describe an SFC planner in which the proposed heuristic is applied in the case of online traffic demand. The planner also implements a consolidation algorithm whose application allows for power consumption savings. Some numerical results are shown in Section 5 to prove the effectiveness of the proposed heuristics. Finally, the main conclusions and future research items are mentioned in Section 6.

2 Related Work

The Internet Engineering Task Force (IETF) has formed the SFC Working Group [8, 9] to define Service Function Chaining related problems and to standardize the architecture and protocols. A Service Function Chain (SFC) is defined as a set of abstract service functions [15] and ordering constraints that must be applied to packets selected as a result of the classification. When virtual service functions are considered, the SFC is referred to as Virtual Network Function Forwarding Graph (VNFFG) within the ETSI [6]. To support the SFCs, Virtual Network Function instances (VNFIs) are activated and executed in COTS servers. To achieve the economics of scale expected from NFV, network link and server resources should be used efficiently. For this reason, efficient algorithms have to be introduced to determine where to instantiate the VNFIs and to route the SFCs, by choosing the network paths and the VNFIs involved. The algorithms have to take into account the limited resources of the network links and the servers and pursue objectives of load balancing, energy saving, recovery from failure, and so on [1]. The task of placing SFCs is closely related to virtual network embedding [16] and virtual data network embedding [17] and may therefore be formulated as an optimization problem with a particular objective. This approach has been followed by [11, 18–21]. For instance, Moens and Turck [11] formulate the SFC placement problem as a linear optimization problem that has the objective of minimizing the number of active servers. Other objectives are pursued in [19] (latency, remaining data rate, number of used network nodes, etc.) and the SFC placing is formulated as a mixed integer quadratically constrained program.

It has been proved that the SFC placing problem is NP-hard. For this reason, efficient heuristics have been proposed and evaluated [10, 13, 22, 23]. For example, Xia et al. [22] formulate the placement and chaining problem as binary integer programming and propose a greedy heuristic in which the SFCs are first sorted according to their resource demands, and the SFCs with the highest resource demands are given priority for placement and routing.

In order to achieve energy consumption saving, the NFV architecture should allow for migrations of VNFIs, that is, the migration of the Virtual Machines implementing the VNFIs. Though VNFI migrations allow for energy consumption saving, they may impact the QoS performance received by the users of the migrated VNFs. A model has been proposed in [24] to derive some performance indicators, such as the whole service downtime and the total migration time, so as to make function migration decisions.

The contribution of this paper is twofold: (i) to propose and to evaluate the performance of an algorithm that performs simultaneously resource dimensioning and SFC routing and (ii) to investigate the advantages, from the point of view of the power consumption saving, that the application of server consolidation techniques allows us to achieve.

3 Offline Algorithms for SFC Routing in NFV Network Architectures

We consider the case in which SFC requests are known in advance. We formulate the optimization problem whose objective is the minimization of the number of dropped SFC requests, with the constraint that the SFCs are routed so as to respect both the server processing capacity and network link bandwidth. With the problem being NP-hard, we introduce a heuristic that allows for (i) the dimensioning of the Virtual Machines in terms of the number of Vcores assigned and (ii) the SFC routing through the VNF instances implemented by the Virtual Machines. The heuristic performs simultaneously the dimensioning and routing operations.

The section is organized as follows. The network and traffic model is introduced in Section 3.1. Sections 3.2 and 3.3 are devoted to illustrating the optimization problem and the proposed heuristic, respectively.

3.1. Network and Traffic Model. Next we introduce the main terminology used to represent the physical network, the VNFs, and the SFC traffic requests [25]. We represent the physical network PN as a directed graph $G^{PN} = (V^{PN}, E^{PN})$, where $V^{PN}$ and $E^{PN}$ are the sets of physical nodes and links, respectively. The set $V^{PN}$ of nodes is given by the union of the three node sets $V^{PN}_A$, $V^{PN}_R$, and $V^{PN}_S$, which are the sets of access, switching, and server nodes, respectively. The server nodes and links are characterized by the following:

(i) $N^{PN}_{core}(w)$: processing capacity of the server node $w \in V^{PN}_S$, in terms of the number of cores available;
(ii) $C^{PN}(d)$: bandwidth of the physical link $d \in E^{PN}$.

We assume that $F$ types of VNFs can be provisioned, such as firewall, IDS, proxy, load balancer, and so on. We denote by $\mathcal{F} = \{f_1, f_2, \ldots, f_F\}$ the set of VNFs, with $f_i$ being the


[Figure 1 graph: ingress access node $u$, links $e_1$, $e_2$, $e_3$, VNF nodes $v_1$ and $v_2$, egress access node $t$; $B^{SFC}(v_1) = B^{SFC}(v_2) = 83.33$ and $C^{SFC}(e_1) = C^{SFC}(e_2) = C^{SFC}(e_3) = 1$]
Figure 1: An example of graph representing an SFC request for a flow characterized by the bandwidth of 1 Mbps and packet length of 1500 bytes.

$i$th VNF type. The packet processing time of the VNF $f_i$ is denoted by $t^{proc}_i$ ($i = 1, \ldots, F$).

We also assume that the network operator is receiving $T$ Service Function Chain (SFC) requests, known in advance. The $i$th SFC request is characterized by the graph $G^{SFC}_i = (V^{SFC}_i, E^{SFC}_i)$, where $V^{SFC}_i$ represents the set of access and VNF nodes and $E^{SFC}_i$ ($i = 1, \ldots, T$) denotes the links between them. In particular, the set $V^{SFC}_i$ is given by the union of $V^{SFC}_{i,A}$ and $V^{SFC}_{i,F}$, denoting the sets of access nodes and VNFs, respectively. The graph is characterized by the following parameters:

(i) $\alpha_{vw}$: assuming the value 1 if the access node $v \in \bigcup_{i=1}^{T} V^{SFC}_{i,A}$ characterizes an SFC request starting/terminating from/to the physical access node $w \in V^{PN}_A$; otherwise its value is 0;
(ii) $\beta_{vk}$: assuming the value 1 if the VNF node $v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}$ needs the application of the VNF $f_k$ ($k \in [1, F]$); otherwise its value is 0;
(iii) $B^{SFC}(v)$: the processing capacity requested by the VNF node $v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}$; the parameter value depends on both the packet length and the bandwidth of the packet flow incoming to the VNF node;
(iv) $C^{SFC}(e)$: bandwidth requested by the link $e \in \bigcup_{i=1}^{T} E^{SFC}_i$.

An example of SFC request is represented in Figure 1, where the traffic emitted by the ingress access node $u$ has to be handled by the VNF nodes $v_1$ and $v_2$ and is terminating at the egress access node $t$. The access and VNF nodes are interconnected by the links $e_1$, $e_2$, and $e_3$. If the flow bandwidth is 1 Mbps and the packet length is 1500 bytes, we obtain the processing capacities $B^{SFC}(v_1) = B^{SFC}(v_2) = 83.33$ (packets per second), while the bandwidths $C^{SFC}(e_1)$, $C^{SFC}(e_2)$, and $C^{SFC}(e_3)$ of all links equal 1 (Mbps).
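As a quick check of these numbers, the processing capacity requested by a VNF node is simply the packet rate of the incoming flow. The following minimal Python sketch (ours, for illustration; names are not from the paper) reproduces the 83.33 pps value of Figure 1:

```python
# Packet rate (packets/s) of a flow, given its bandwidth and packet length.
# Reproduces the Figure 1 example: 1 Mbps flow, 1500-byte packets -> ~83.33 pps.

def packet_rate(bandwidth_bps: float, packet_len_bytes: int) -> float:
    """Requested processing capacity B_SFC(v) in packets per second."""
    return bandwidth_bps / (packet_len_bytes * 8)

if __name__ == "__main__":
    b = packet_rate(1e6, 1500)  # 1 Mbps, 1500-byte packets
    print(f"B_SFC(v1) = B_SFC(v2) = {b:.2f} pps")  # prints 83.33 pps
```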

3.2. Optimization Problem for the SFC Routing and VNF Instance Dimensioning. The objective of the SFC Routing and VNF Instance Dimensioning (SRVID) problem is to maximize the number of accepted SFC requests. The output of the problem is characterized by the following: (i) the servers in which the VNF nodes are executed and (ii) the network paths in which the virtual links of the SFC are routed. This arrangement has to be accomplished without violating both the server processing and the physical link capacities. We assume that all of the servers create a VNF instance for each type of VNF, which will be shared by the SFCs using that server and requesting that type of VNF. For this reason, each server will activate $F$ VNF instances, one for each type, and another output of the problem is to determine the number of Vcores to be assigned to each VNF instance.

We assume that one virtual link of the SFC graphs can be routed through a single physical network path. We introduce the following notations:

(i) $P$: set of paths in $G^{PN}$;
(ii) $\delta_{ep}$: the binary function assuming value 1 or 0 if the physical network link $e$ belongs or does not belong to the path $p \in P$, respectively;
(iii) $a^{PN}(p)$ and $b^{PN}(p)$: origin and destination nodes of the path $p \in P$;
(iv) $a^{SFC}(d)$ and $b^{SFC}(d)$: origin and destination nodes of the virtual link $d \in \bigcup_{i=1}^{T} E^{SFC}_i$.

Next we formulate the optimal SRVID problem, characterized by the following optimization variables:

(i) $x_h$: binary variable assuming the value 1 if the $h$th SFC request is accepted; otherwise its value is zero;
(ii) $y_{wk}$: integer variable characterizing the number of Vcores allocated to the VNF instance of type $k$ in the server $w \in V^{PN}_S$;
(iii) $z^k_{vw}$: binary variable assuming the value 1 if the VNF node $v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}$ is served by the VNF instance of type $k$ in the server $w \in V^{PN}_S$;
(iv) $u_{dp}$: binary variable assuming the value 1 if the virtual link $d \in \bigcup_{i=1}^{T} E^{SFC}_i$ is embedded in the physical network path $p \in P$; otherwise its value is zero.

Next we report the constraints for the optimization variables:

$$\sum_{k=1}^{F} y_{wk} \le N^{PN}_{core}(w), \quad w \in V^{PN}_S, \quad (1)$$

$$\sum_{k=1}^{F} \sum_{w \in V^{PN}_S} z^k_{vw} \le 1, \quad v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}, \quad (2)$$

$$z^k_{vw} \le \beta_{vk}, \quad k \in [1, F],\; v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F},\; w \in V^{PN}_S, \quad (3)$$

$$\sum_{v \in \bigcup_{i=1}^{T} V^{SFC}_{i,F}} z^k_{vw}\, B^{SFC}(v)\, t^{proc}_k \le y_{wk}, \quad k \in [1, F],\; w \in V^{PN}_S, \quad (4)$$

$$u_{dp} \le z^k_{a^{SFC}(d)\,a^{PN}(p)}, \quad a^{SFC}(d) \in \bigcup_{i=1}^{T} V^{SFC}_{i,F},\; k \in [1, F],\; p \in P, \quad (5)$$

$$u_{dp} \le z^k_{b^{SFC}(d)\,b^{PN}(p)}, \quad b^{SFC}(d) \in \bigcup_{i=1}^{T} V^{SFC}_{i,F},\; k \in [1, F],\; p \in P, \quad (6)$$

$$u_{dp} \le \alpha_{a^{SFC}(d)\,a^{PN}(p)}, \quad a^{SFC}(d) \in \bigcup_{i=1}^{T} V^{SFC}_{i,A},\; p \in P, \quad (7)$$

$$u_{dp} \le \alpha_{b^{SFC}(d)\,b^{PN}(p)}, \quad b^{SFC}(d) \in \bigcup_{i=1}^{T} V^{SFC}_{i,A},\; p \in P, \quad (8)$$

$$\sum_{p \in P} u_{dp} \le 1, \quad d \in \bigcup_{i=1}^{T} E^{SFC}_i, \quad (9)$$

$$\sum_{d \in \bigcup_{i=1}^{T} E^{SFC}_i} C^{SFC}(d) \sum_{p \in P} \delta_{ep}\, u_{dp} \le C^{PN}(e), \quad e \in E^{PN}, \quad (10)$$

$$x_h \le \sum_{k=1}^{F} \sum_{w \in V^{PN}_S} z^k_{vw}, \quad h \in [1, T],\; v \in V^{SFC}_{h,F}, \quad (11)$$

$$x_h \le \sum_{p \in P} u_{dp}, \quad h \in [1, T],\; d \in E^{SFC}_h. \quad (12)$$

Constraint (1) establishes the fact that at most a number of Vcores equal to the available ones are used for each server node $w \in V^{PN}_S$. Constraint (2) expresses the condition that a VNF can be served by only one VNF instance. To guarantee that a node $v \in V^{SFC}_{h,F}$ needing the application of a VNF type is mapped to a correct VNF instance, we introduce constraint (3). Constraint (4) limits the number of VNF nodes assigned to one VNF instance by taking into account both the number of Vcores assigned to the VNF instance and the required VNF node processing capacities. Constraints (5)–(8) establish the fact that, when the virtual link $d$ is supported by the physical network path $p$, then $a^{PN}(p)$ and $b^{PN}(p)$ must be the physical network nodes that the nodes $a^{SFC}(d)$ and $b^{SFC}(d)$ of the virtual graph are assigned to. The choice of mapping of any virtual link on a single physical network path is represented by constraint (9). Constraint (10) avoids any overloading of any physical network link. Finally, constraints (11) and (12) establish the fact that an SFC request can be accepted when the nodes and the links of the SFC graph are assigned to VNF instances and physical network paths.

The problem objective is to maximize the number of SFCs accepted, given by

$$\max \sum_{h=1}^{T} x_h. \quad (13)$$

The introduced optimization problem is intractable because it requires solving an NP-hard bin packing problem [26]. Due to its high complexity, it is not possible to solve the problem directly in a timely manner, given the large number of servers and network nodes. For this reason we propose an efficient heuristic in Section 3.3.
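Although the full problem must be handed to an ILP solver or to the heuristic of Section 3.3, the capacity constraints are straightforward to verify for any candidate solution. The following Python sketch (an illustration of ours, with self-chosen data structures, not part of the original formulation) checks constraints (1), (4), and (10):

```python
# Illustrative feasibility check of constraints (1), (4), and (10) for a
# candidate SRVID solution; the data layout below is our own assumption.
# z[(v, k, w)] = 1 if VNF node v uses the type-k instance on server w
# y[(w, k)]    = Vcores allocated to the type-k instance on server w
# u[(d, p)]    = 1 if virtual link d is routed on physical path p

def feasible(z, y, u, B, t_proc, n_core, c_pn, c_sfc, delta):
    # Constraint (1): per-server Vcore budget.
    for w, cores in n_core.items():
        if sum(y.get((w, k), 0) for k in t_proc) > cores:
            return False
    # Constraint (4): each instance's load must fit its allocated Vcores.
    for (w, k), vcores in y.items():
        load = sum(B[v] * t_proc[k]
                   for (v, kk, ww), val in z.items()
                   if val == 1 and kk == k and ww == w)
        if load > vcores:
            return False
    # Constraint (10): no physical link e may be overloaded.
    for e, cap in c_pn.items():
        used = sum(c_sfc[d] * delta.get((e, p), 0)
                   for (d, p), val in u.items() if val == 1)
        if used > cap:
            return False
    return True
```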

3.3. Maximizing the Accepted SFC Requests Number (MASRN) Heuristic. The Maximizing the Accepted SFC Requests Number (MASRN) heuristic is based on the embedding algorithm proposed in [14] and has the objective of maximizing the number of accepted SFC requests. The MASRN algorithm tries routing the SFCs one by one, by using the least loaded link and server resources so as to balance their use. Algorithm 1 illustrates the operation mode of the MASRN algorithm; the routing of a target SFC, the $k_{tar}$th, is illustrated. The main inputs of the algorithm are (i) the set $T_{pre}$ containing the indexes of the SFC requests routed successfully before the $k_{tar}$th SFC request, (ii) the $k_{tar}$th SFC request graph $G^{SFC}_{k_{tar}} = (V^{SFC}_{k_{tar}}, E^{SFC}_{k_{tar}})$, (iii) the values of the parameters $\alpha_{vw}$ for the $k_{tar}$th SFC request, and (iv) the values of the variables $z^k_{vw}$ and $u_{dp}$ for the SFC requests successfully routed before the $k_{tar}$th SFC request, as defined in Section 3.2.

The operation mode of the heuristic is based on the following variables:

(i) $A_k(w)$: the load of the cores allocated for the VNF instance of type $k \in [1, F]$ in the server $w \in V^{PN}_S$; the value of $A_k(w)$ is initialized by taking into account the core resource amount used by the SFCs successfully routed before the $k_{tar}$th SFC request; hence its initial value is
$$A_k(w) = \sum_{v \in \bigcup_{i \in T_{pre}} V^{SFC}_{i,F}} z^k_{vw}\, B^{SFC}(v)\, t^{proc}_k; \quad (14)$$

(ii) $S_N(w)$: defined as the stress of the node $w \in V^{PN}_S$ [14] and characterizing the server load; its value is initialized to
$$S_N(w) = \sum_{k=1}^{F} A_k(w); \quad (15)$$

(iii) $S_L(e)$: defined as the stress of the physical network link $e \in E^{PN}$ [14] and characterizing the link load; its value is initialized to
$$S_L(e) = \sum_{d \in \bigcup_{i \in T_{pre}} E^{SFC}_i} C^{SFC}(d) \sum_{p \in P} \delta_{ep}\, u_{dp}. \quad (16)$$


(1) Input: Physical Network Graph $G^{PN} = (V^{PN}, E^{PN})$; $k_{tar}$th SFC request; $k_{tar}$th SFC request graph $G^{SFC}_{k_{tar}} = (V^{SFC}_{k_{tar}}, E^{SFC}_{k_{tar}})$; $T_{pre}$; $\alpha_{vw}$, $v \in V^{SFC}_{k_{tar},A}$, $w \in V^{PN}_A$; $z^k_{vw}$, $k \in [1,F]$, $v \in \bigcup_{i \in T_{pre}} V^{SFC}_{i,F}$, $w \in V^{PN}_S$; $u_{dp}$, $d \in \bigcup_{i \in T_{pre}} E^{SFC}_i$, $p \in P$
(2) Variables: $A_k(w)$, $k \in [1,F]$, $w \in V^{PN}_S$; $S_N(w)$, $w \in V^{PN}_S$; $S_L(e)$, $e \in E^{PN}$; $y_{wk}$, $k \in [1,F]$, $w \in V^{PN}_S$
/* Evaluation of the candidate mapping of $G^{SFC}_{k_{tar}} = (V^{SFC}_{k_{tar}}, E^{SFC}_{k_{tar}})$ in $G^{PN} = (V^{PN}, E^{PN})$ */
(3) assign the nodes $v \in V^{SFC}_{k_{tar},A}$ to the physical network nodes $w \in V^{PN}_A$ according to the values of the parameters $\alpha_{vw}$
(4) select the nodes $w \in V^{PN}_S$ and the physical network paths $p \in P$ to be assigned to the VNF nodes $v \in V^{SFC}_{k_{tar},F}$ and virtual links $d \in E^{SFC}_{k_{tar}}$ by applying the algorithm proposed in [14], based on the values of the node and link stresses $S_N(w)$ and $S_L(e)$; determine the variable values $z^k_{vw}$, $k \in [1,F]$, $v \in V^{SFC}_{k_{tar},F}$, $w \in V^{PN}_S$ and $u_{dp}$, $d \in E^{SFC}_{k_{tar}}$, $p \in P$
/* Server Resource Availability Check Phase */
(5) for $v \in V^{SFC}_{k_{tar},F}$, $w \in V^{PN}_S$, $k \in [1,F]$ do
(6)  if $\sum_{s=1, s \ne k}^{F} y_{ws} + \lceil A_k(w) + z^k_{vw} B^{SFC}(v)\, t^{proc}_k \rceil \le N^{PN}_{core}(w)$ then
(7)   $A_k(w) = A_k(w) + z^k_{vw} B^{SFC}(v)\, t^{proc}_k$
(8)   $S_N(w) = S_N(w) + z^k_{vw} B^{SFC}(v)\, t^{proc}_k$
(9)   $y_{wk} = \lceil A_k(w) \rceil$
(10)  else
(11)   REJECT the $k_{tar}$th SFC request
(12)  end if
(13) end for
/* Link Resource Availability Check Phase */
(14) for $e \in E^{PN}$ do
(15)  if $S_L(e) + \sum_{d \in E^{SFC}_{k_{tar}}} C^{SFC}(d) \sum_{p \in P} \delta_{ep} u_{dp} \le C^{PN}(e)$ then
(16)   $S_L(e) = S_L(e) + \sum_{d \in E^{SFC}_{k_{tar}}} C^{SFC}(d) \sum_{p \in P} \delta_{ep} u_{dp}$
(17)  else
(18)   REJECT the $k_{tar}$th SFC request
(19)  end if
(20) end for
(21) Output: $z^k_{vw}$, $k \in [1,F]$, $v \in V^{SFC}_{k_{tar},F}$, $w \in V^{PN}_S$; $u_{dp}$, $d \in E^{SFC}_{k_{tar}}$, $p \in P$

Algorithm 1: MASRN algorithm.

The candidate physical network links and nodes in which to embed the target SFC are evaluated. First of all, the access nodes are evaluated (line 3) according to the values of the parameters $\alpha_{vw}$, $v \in V^{SFC}_{k_{tar},A}$, $w \in V^{PN}_A$. Next the MASRN algorithm selects a cluster of server nodes (line 4) that are not lightly loaded but also likely to result in low substrate link stresses when they are connected. Details on the procedure are reported in [14].

In the next phases, the resource availability in the selected cluster is checked. The resource availability in the server nodes and physical network links is verified in the Server (lines 5–13) and Link (lines 14–20) Resource Availability Check Phases, respectively. If resources are not available in either links or nodes, the target SFC request is rejected. In particular, in the Server Resource Availability Check Phase the algorithm verifies whether the allocation of new Vcores is needed and, in this case, their availability is checked (line 6). The variable $y_{wk}$ is also updated in this phase (line 9).

Finally, the outputs (line 21) of the MASRN algorithm are the selected values for the variables $z^k_{vw}$, $k \in [1,F]$, $v \in V^{SFC}_{k_{tar},F}$, $w \in V^{PN}_S$ and $u_{dp}$, $d \in E^{SFC}_{k_{tar}}$, $p \in P$.

The computational complexity of the MASRN algorithm depends on the procedure for the evaluation of the potential nodes [14]. Let $N_s$ be the total number of servers. The complexity of the phase in which the potential nodes are evaluated can be derived according to the following remarks: (i) as many servers as the number of the VNFs of the SFC are determined and (ii) the $h$th server is selected by evaluating the $(N_s - h)$ shortest paths and by performing a minimum operation on a list with $(N_s - h)$ elements. According to these remarks, the computational complexity is given by $O(V(V + N_s)N_l \log(N_s + N_n))$, where $V$ is the

6 Journal of Electrical and Computer Engineering

[Figure 2 diagram: the orchestrator contains the SFC Scheduler, the Resource Monitor, and the Consolidation Module; the Scheduler receives SFC requests and issues scheduling decisions, the Resource Monitor provides resource information, and the Consolidation Module issues migration decisions]
Figure 2: SFC planner architecture.

maximum number of VNFs in an SFC and $N_l$ and $N_n$ are the numbers of nodes and links of the network.
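For illustration, the two availability-check phases of Algorithm 1 (lines 5–20) translate almost directly into code. The following Python sketch is our own rendering under simplified, self-chosen data structures; as in the pseudocode, no rollback of partial updates is shown:

```python
from math import ceil

def masrn_checks(servers, links, state):
    """Server and Link Resource Availability Check Phases of Algorithm 1.

    servers: list of (v, k, w, demand) tuples for the target SFC, where
             demand = B_SFC(v) * t_proc_k is the core load of VNF node v
             mapped on the type-k instance of server w.
    links:   dict e -> bandwidth the target SFC routes over physical link e.
    state:   dict with keys 'A', 'S_N', 'S_L', 'y' (current loads, stresses,
             and Vcore allocations), 'n_core', 'c_pn', 'types' (VNF types).
    Returns True if the SFC is accepted (state updated); False otherwise.
    """
    A, y = state['A'], state['y']
    # Server Resource Availability Check Phase (lines 5-13).
    for (v, k, w, demand) in servers:
        others = sum(y.get((w, s), 0) for s in state['types'] if s != k)
        if others + ceil(A.get((w, k), 0) + demand) > state['n_core'][w]:
            return False                     # REJECT the target SFC request
        A[(w, k)] = A.get((w, k), 0) + demand
        state['S_N'][w] = state['S_N'].get(w, 0) + demand
        y[(w, k)] = ceil(A[(w, k)])          # update the Vcore allocation
    # Link Resource Availability Check Phase (lines 14-20).
    for e, extra in links.items():
        if state['S_L'].get(e, 0) + extra > state['c_pn'][e]:
            return False                     # REJECT the target SFC request
        state['S_L'][e] = state['S_L'].get(e, 0) + extra
    return True
```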

4 Online Algorithms for SFC Routing in NFV Network Architectures

In the online approach, the SFC requests arrive at the system over time and each of them is characterized by a certain duration. In order to reduce the complexity of the online SFC resource allocation algorithm, we designed an SFC planner whose general architecture is described in Section 4.1. The operation mode of the SFC planner is based on some algorithms that we describe in Section 4.2.

4.1. SFC Planner Architecture. The architecture of the SFC planner is shown in Figure 2. It could make up the main component of an orchestrator in the NFV architecture [6]. It is composed of three components: the SFC Scheduler, the Resource Monitor, and the Consolidation Module.

Once the SFC Scheduler receives an SFC request, it executes an algorithm to verify if resources are available and to decide which resources to use.

The physical resources, as well as the Virtual Machines running on the servers, are monitored by the Resource Monitor. If a failure of any physical/virtual node or link occurs, it notifies the event to the SFC Scheduler.

The Consolidation Module allows for the server resource consolidation in low traffic periods. When the consolidation technique is applied, VMs are migrated to as few servers as possible; the remaining ones are turned off and power consumption saving can be achieved.

4.2. SFC Scheduling and Consolidation Algorithms. In this section we describe the algorithms executed by the SFC Scheduler and the Consolidation Module.

Whenever a new SFC request arrives at the SFC planner, the SFC scheduling algorithm is executed in order to find the best embedding in the system while maintaining a uniform usage of the network resources. The main steps of this procedure are reported in the flow chart in Figure 3 and are based on the MASRN heuristic described in Section 3.3. The algorithm starts selecting the set of servers on which the VNFs requested by the SFC will be executed. In the second step, the physical paths on which the virtual links will be mapped are selected. If there are not enough available

[Figure 3 flow chart: SFC request → servers selection using the potential factor [26] → if not enough resources, drop the SFC request; otherwise path selection → if not enough resources, drop the SFC request; otherwise accept the SFC request]
Figure 3: Flow chart of the SFC scheduling algorithm.

resources in the first or in the second step, the request is dropped.

The flow chart of the consolidation algorithm is reported in Figure 4. Each server $s \in V^{PN}_S$ is characterized by a parameter $\eta_s$, defined as the ratio of the total amount of incoming/outgoing traffic $\Lambda_s$ that it is handling to the power $P_s$ it is consuming, that is, $\eta_s = \Lambda_s / P_s$. The power consumption model assumes a constant power contribution and a variable one that linearly increases versus the handled traffic. The server power consumption is expressed by

$$P_s = P_{idle} + (P_{max} - P_{idle}) \frac{L_s}{C_s}, \quad \forall s \in V^{PN}_S, \quad (17)$$

where $P_{idle}$ is the power consumed by the server when no traffic is handled, $P_{max}$ is the maximum server power consumption, $L_s$ is the total load handled by the server $s$, and $C_s$ is its maximum capacity. The maximum power that the server can consume is expressed as $P_{max} = P_{idle}/a$, where $a$ is a parameter characterizing how much the power consumption depends on the handled traffic. The power consumption is the more rate adaptive the lower $a$ is. For $a = 1$, the rate adaptive power consumption is absent and the consumed power is constant.

The proposed algorithm tries switching off the server with minimum power consumption per bit carried. For this reason, when the consolidation algorithm is executed, it starts selecting the server $s_{min}$ with the minimum value of $\eta_s$ as


[Figure 4 flow chart: upon a resource consolidation request, select the server with minimum η_s; select the set of possible destinations and sort it based on descending values of η_s; select the first server of the set; if there are enough resources, perform the migration, otherwise remove the server from the set and repeat until the set is empty]
Figure 4: Flow chart of the SFC consolidation algorithm.

the one to turn off. A set of possible destination servers able to host the VMs executed on $s_{min}$ is then evaluated and sorted in descending order, based again on the value of $\eta_s$. The algorithm proceeds selecting the first element of the ordered set and evaluating if there are enough resources in the site to reroute all the flows generated from or terminated to $s_{min}$. If it succeeds, the migration is performed and the server $s_{min}$ is turned off. If it fails, the destination server is removed and the next server of the list is considered. When the ordered set of possible destinations becomes empty, another server to turn off is selected.

The SFC Consolidation Module also manages the resource deconsolidation procedure when the reactivation of physical servers/links is needed. Basically, the deconsolidation algorithm selects the most loaded VMs already migrated to a new server and moves them to their original servers. The flows handled by these VMs are accordingly rerouted.
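A compact sketch of one consolidation step, combining the power model (17) with the server-selection logic above, is given below. It is our own illustration: the function names and the can_host/migrate callbacks are assumptions, not part of the paper.

```python
def server_power(load, c_max, p_idle, a):
    """Server power consumption as in (17), with P_max = P_idle / a."""
    return p_idle + (p_idle / a - p_idle) * load / c_max

def consolidation_step(servers, can_host, migrate):
    """Try to switch off the server with minimum eta_s = Lambda_s / P_s.

    servers:  dict s -> {'traffic': Lambda_s, 'load': L_s, 'c': C_s,
                         'p_idle': P_idle, 'a': a}
    can_host: callback(dst, src) -> True if dst can absorb all VMs/flows of src
    migrate:  callback(dst, src) performing the migration and flow rerouting
    Returns the server switched off, or None if no destination was feasible.
    """
    eta = {s: v['traffic'] /
              server_power(v['load'], v['c'], v['p_idle'], v['a'])
           for s, v in servers.items()}
    s_min = min(eta, key=eta.get)             # least efficient server
    # Candidate destinations, sorted by descending values of eta_s.
    for dst in sorted((s for s in servers if s != s_min),
                      key=eta.get, reverse=True):
        if can_host(dst, s_min):
            migrate(dst, s_min)               # reroute flows, move the VMs
            return s_min                      # s_min can now be turned off
    return None
```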

5 Numerical Results

We evaluate the effectiveness of the proposed algorithms in the case of the network scenario reported in Figure 5. The network is composed of five core nodes, five edge nodes, and six access nodes, in which the SFC requests are randomly generated and terminated. A Network Function Virtualization (NFV) site is connected to edge nodes 3 and 4 through two access routers 1 and 2. The NFV site is also composed of two switches and sixteen servers. In the basic configuration, 40 Gbps links are considered, except the links connecting the servers to the switches in the NFV site, whose rate is equal to 10 Gbps. The sixteen servers are equipped with 48 Vcores each.

Each SFC request is composed of three Virtual Network Functions (VNFs), that is, a firewall, a load balancer, and VPN encryption, whose packet processing times are assumed to be equal to 7.08 μs, 0.65 μs, and 1.64 μs [27], respectively. We also assume that the load balancer splits the input traffic towards a number of output links chosen uniformly from 1 to 3.

An example of graph for a generated SFC is reported in Figure 6, in which $u_t$ is an ingress access node and $v^1_t$, $v^2_t$, and $v^3_t$ are egress access nodes. The first and second VNFs are a firewall and VPN encryption; the third VNF is a load balancer splitting the traffic towards three output links. In the considered case study, we also assume that the SFC handles traffic flows of packet length equal to 1500 bytes and required bandwidth uniformly distributed from 500 Mbps to 1 Gbps.

5.1. Offline SFC Scheduling. In the offline case we assume that a given number $T$ of SFC requests are a priori known. For the case study previously described, we apply the MASRN heuristic proposed in Section 3.3. We report the percentage of the dropped SFCs in Figure 7 as a function of the offered number of SFCs. We report the curve for the basic scenario and the ones in which the link capacities are doubled, quadrupled, and increased per ten times, respectively. From Figure 7 we can see how, in order to have a dropped SFC percentage lower than 10%, the number of SFC requests has to be smaller than 60 in the case of the basic scenario. This remarkable blocking is due to the shortage of network capacity. In fact, we can notice from Figure 7 how the dropped SFC percentage significantly decreases when the link bandwidth increases. For instance, when the capacity is quadrupled, the network infrastructure is able to accommodate up to 180 SFCs without any loss.

This is confirmed in Figure 8, where we report the number of Vcores occupied in the servers in the case of $T = 360$. Each of the servers is represented on the x-axis with the identification (ID) of Figure 5. We also show in Figure 8 the number of Vcores allocated to each firewall, load balancer, and VPN encryption VNF with the blue, green, and red colors, respectively. The basic scenario case and the ones in which the link capacity is doubled, quadrupled, and increased by 10 times are reported in Figures 8(a), 8(b), 8(c), and 8(d), respectively. From Figure 8 we can notice how the MASRN heuristic works correctly by guaranteeing a uniform occupancy of the servers in terms of total number of Vcores used. We can also notice how the firewall VNF is the one needing the higher number of Vcores allocated. That is a consequence of the higher processing time that the firewall VNF requires with respect to the load balancer and VPN encryption VNFs.

5.2. Online SFC Scheduling. The numerical results are achieved in the following traffic scenario. The traffic pattern we consider is supposed to be sinusoidal, with a peak value during daytime and minimum value during nighttime. We also assume that SFC requests follow a Poisson process and the SFC duration time is distributed according to an


[Figure 5 diagram: six access nodes and five edge nodes interconnected through five core nodes; the NFV site, attached to edge nodes 3 and 4 via routers 1 and 2, contains two switches connecting servers 1–16]
Figure 5: The network topology is composed of six access nodes, five edge nodes, and five core nodes. The NFV site is composed of two routers, two switches, and sixteen servers.

exponential distribution. The following parameters define the SFC requests in the Peak Hour Interval (PHI):

(i) $\lambda$: the arrival rate of SFC requests;

(ii) $\mu$: the termination rate of an embedded SFC;

(iii) $[\beta^{min}, \beta^{max}]$: the range in which the bandwidth request for an SFC is randomly selected.

We consider $K$ time intervals in a day, and each of them is characterized by a scaling factor $\alpha_h$ with $h = 0, \ldots, K-1$. In order to capture the sinusoidal shape of the traffic in a day, $\alpha_h$ is evaluated with the following expression:

$$\alpha_h = \begin{cases} 1, & h = 0, \\ 1 - \dfrac{2h}{K}\,(1 - \alpha_{min}), & h = 1, \ldots, \dfrac{K}{2}, \\ 1 - 2\,\dfrac{K - h}{K}\,(1 - \alpha_{min}), & h = \dfrac{K}{2} + 1, \ldots, K - 1, \end{cases} \quad (18)$$

where $\alpha_0$ and $\alpha_{min}$ refer to the peak traffic and the minimum traffic scenario, respectively.

The parameter $\alpha_h$ affects the SFC arrival rate and the bandwidth requested. In particular, we have the following

[Figure 6 graph: ingress access node u_t → firewall VNF (FW) → VPN encryption VNF (EV) → load balancer VNF (LB) → egress access nodes v1_t, v2_t, v3_t]
Figure 6: An example of SFC composed of one firewall VNF, one VPN encryption VNF, and one load balancer VNF that splits the input traffic towards three output logical links. $u_t$ denotes the ingress access node, while $v^1_t$, $v^2_t$, and $v^3_t$ denote the egress access nodes.

[Figure 7 plot: percentage of dropped SFC requests (0–90%) versus number of offered SFC requests (30–360), for link capacity ×1, ×2, ×4, and ×10]
Figure 7: Dropped SFC percentage as a function of the number of SFCs offered. The four curves are reported for the basic scenario case and the ones in which the link capacities are doubled, quadrupled, and increased by 10 times.

expression for the SFC request rate $\lambda_h$ and the minimum and the maximum requested bandwidths $\beta^{min}_h$ and $\beta^{max}_h$, respectively, in the interval $h$ ($h = 0, \ldots, K-1$):

$$\lambda_h = \alpha_h \lambda, \qquad \beta^{min}_h = \alpha_h \beta^{min}, \qquad \beta^{max}_h = \alpha_h \beta^{max}. \quad (19)$$

Next we evaluate the performance of the SFC planner proposed in Section 4 in the dynamic traffic scenario and for parameter values $\beta^{min} = 500$ Mbps, $\beta^{max} = 1$ Gbps, $\lambda = 1$ request/min, and $\mu = 1/15$ min$^{-1}$. We also assume $K = 24$, with traffic change, and consequently application of the consolidation algorithm, every hour.

Finally, we consider servers whose power consumption is characterized by the expression (17) with parameters $P_{max} = 1000$ W and $C_s = 10$ Gbps.

We report the SFC blocking probability as a function of the average number of offered SFC requests during the PHI ($\lambda/\mu$) in Figure 9. We show the results in the case of the basic link capacity scenario and in the ones obtained considering a capacity of the links that is twice and four times higher than the basic one. We consider the case of a server with no rate adaptive power consumption ($a = 1$). As we can observe, when the offered traffic is low, the degradation in terms of SFC blocking probability introduced by the consolidation technique increases. In fact, in this low traffic scenario, more resource consolidation is possible, with the consequence of higher SFC blocking probabilities. When the offered traffic increases, the blocking probability with or without applying the consolidation technique does not change much, because the network is heavily loaded and only few servers can be turned off. Furthermore, as we can expect, the blocking probability decreases when the link capacity increases. The performance loss due to the application of the consolidation technique is what we have to pay in order to obtain benefits in terms of power saved by turning off servers. The curves in Figure 10 compare the power consumption percentage savings that we can achieve in the cases $a = 0.1$ and $a = 1$. The results show that, when the offered traffic decreases and the link capacity increases, higher power consumption saving can be achieved. The reason is that we have more resources on the links and less loaded VMs that allow for a higher number of VM migrations and resource consolidation. The other result to notice is that the use of rate adaptive servers reduces the power consumption saving when we perform resource consolidation. This is due to the lower value $P_{idle}$ of the constant power consumption of the server, which leads to lower power consumption saving when a server is switched off.

6 Conclusions

The aim of this paper has been to propose heuristics for the resource dimensioning and the routing of Service Function Chains in network architectures employing the Network Function Virtualization technology. We have introduced the optimization problem, whose objective is the minimization of the number of dropped SFC requests subject to the compliance of server processing and link bandwidth capacities. With the problem being NP-hard, we have proposed the Maximizing the Accepted SFC Requests Number (MASRN) heuristic, which is based on the uniform occupancy of the server and link resources. The heuristic is also able to dimension the Vcores of the servers by evaluating the number of Vcores to be allocated to each VNF instance.

An SFC planner has also been proposed for the dynamic traffic scenario. The planner is based on a consolidation algorithm that allows for the switching off of servers during the low traffic periods. A case study has been analyzed in which we have shown that the heuristic works correctly by uniformly allocating the server resources and reserving a higher number of Vcores for the VNFs characterized by higher packet processing time. Furthermore, the proposed SFC planner allows for a power consumption saving dependent on the offered traffic and varying from 10% to 70%. Such a saving is to be paid with an increase in SFC blocking probability. As future research, we will propose and investigate Virtual Machine migration policies


[Figure 8 plots (a)–(d): number of Vcores allocated per VNF instance in each of the sixteen servers (server IDs 1–16) for T = 360, in the basic link capacity scenario and with link capacities ×2, ×4, and ×10]
Figure 8: Number of allocated Vcores in each server. The numbers of Vcores for firewall, load balancer, and VPN encryption VNFs are represented with blue, green, and red colors, respectively.

[Figure 9 plot: SFC blocking probability (log scale, 1E-08 to 1E+00) versus average number of offered SFC requests (30–360), with and without consolidation, for link capacity ×1, ×2, and ×4]
Figure 9: SFC blocking probability as a function of the average number of SFCs offered in the PHI for the cases in which the consolidation technique is applied and it is not. The server power consumption is constant ($a = 1$).


[Figure 10 plot: power consumption saving (0–80%) versus average number of offered SFC requests (30–360), for link capacity ×1, ×2, and ×4 and for a = 1 and a = 0.1]
Figure 10: The power consumption percentage saving achieved with the application of the consolidation technique versus the average number of SFCs offered in the PHI. The cases $a = 1$ and $a = 0.1$ are considered.

allowing for the achievement of a right trade-off between power consumption saving and SFC blocking probability degradation.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

The work described in this paper has been funded by Telecom Italia within the research agreement titled "Definition and Evaluation of Protocols and Architectures of a Network Automation Platform for TI-NFV Infrastructure".

References

[1] R. Mijumbi, J. Serrat, J. Gorricho, N. Bouten, F. De Turck, and R. Boutaba, "Network function virtualization: state-of-the-art and research challenges," IEEE Communications Surveys & Tutorials, vol. 18, no. 1, pp. 236–262, 2016.

[2] P. Veitch, M. J. McGrath, and V. Bayon, "An instrumentation and analytics framework for optimal and robust NFV deployment," IEEE Communications Magazine, vol. 53, no. 2, pp. 126–133, 2015.

[3] R. Yu, G. Xue, V. T. Kilari, and X. Zhang, "Network function virtualization in the multi-tenant cloud," IEEE Network, vol. 29, no. 3, pp. 42–47, 2015.

[4] Z. Bronstein, E. Roch, J. Xia, and A. Molkho, "Uniform handling and abstraction of NFV hardware accelerators," IEEE Network, vol. 29, no. 3, pp. 22–29, 2015.

[5] B. Han, V. Gopalakrishnan, L. Ji, and S. Lee, "Network function virtualization: challenges and opportunities for innovations," IEEE Communications Magazine, vol. 53, no. 2, pp. 90–97, 2015.

[6] ETSI Industry Specification Group (ISG) NFV, "ETSI Group Specifications on Network Function Virtualization, 1st Phase Documents," January 2015, http://docbox.etsi.org/ISG/NFV/Open.

[7] H. Hawilo, A. Shami, M. Mirahmadi, and R. Asal, "NFV: state of the art, challenges, and implementation in next generation mobile networks (vEPC)," IEEE Network, vol. 28, no. 6, pp. 18–26, 2014.

[8] The Internet Engineering Task Force (IETF), Service Function Chaining (SFC) Working Group (WG), 2015, https://datatracker.ietf.org/wg/sfc/charter.

[9] The Internet Engineering Task Force (IETF), Service Function Chaining (SFC) Working Group (WG) Documents, 2015, https://datatracker.ietf.org/wg/sfc/documents.

[10] M. Yoshida, W. Shen, T. Kawabata, K. Minato, and W. Imajuku, "MORSA: a multi-objective resource scheduling algorithm for NFV infrastructure," in Proceedings of the 16th Asia-Pacific Network Operations and Management Symposium (APNOMS '14), pp. 1–6, Hsinchu, Taiwan, September 2014.

[11] H. Moens and F. D. Turck, "VNF-P: a model for efficient placement of virtualized network functions," in Proceedings of the 10th International Conference on Network and Service Management (CNSM '14), pp. 418–423, November 2014.

[12] R. Mijumbi, J. Serrat, J. L. Gorricho, N. Bouten, F. De Turck, and S. Davy, "Design and evaluation of algorithms for mapping and scheduling of virtual network functions," in Proceedings of the 1st IEEE Conference on Network Softwarization (NetSoft '15), pp. 1–9, University College London, London, UK, April 2015.

[13] S. Clayman, E. Maini, A. Galis, A. Manzalini, and N. Mazzocca, "The dynamic placement of virtual network functions," in Proceedings of the IEEE Network Operations and Management Symposium (NOMS '14), pp. 1–9, Krakow, Poland, May 2014.

[14] Y. Zhu and M. H. Ammar, "Algorithms for assigning substrate network resources to virtual network components," in Proceedings of the 25th IEEE International Conference on Computer Communications (INFOCOM '06), pp. 1–12, IEEE, Barcelona, Spain, April 2006.

[15] G. Lee, M. Kim, S. Choo, S. Pack, and Y. Kim, "Optimal flow distribution in service function chaining," in Proceedings of the 10th International Conference on Future Internet (CFI '15), pp. 17–20, ACM, Seoul, Republic of Korea, June 2015.

[16] A. Fischer, J. F. Botero, M. T. Beck, H. De Meer, and X. Hesselbach, "Virtual network embedding: a survey," IEEE Communications Surveys and Tutorials, vol. 15, no. 4, pp. 1888–1906, 2013.

[17] M. Rabbani, R. Pereira Esteves, M. Podlesny, G. Simon, L. Zambenedetti Granville, and R. Boutaba, "On tackling virtual data center embedding problem," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 177–184, Ghent, Belgium, May 2013.

[18] A. Basta, W. Kellerer, M. Hoffmann, H. J. Morper, and K. Hoffmann, "Applying NFV and SDN to LTE mobile core gateways, the functions placement problem," in Proceedings of the 4th Workshop on All Things Cellular: Operations, Applications, and Challenges (AllThingsCellular '14), pp. 33–38, ACM, Chicago, Ill, USA, August 2014.

[19] M. Bagaa, T. Taleb, and A. Ksentini, "Service-aware network function placement for efficient traffic handling in carrier cloud," in Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC '14), pp. 2402–2407, IEEE, Istanbul, Turkey, April 2014.

[20] S. Mehraghdam, M. Keller, and H. Karl, "Specifying and placing chains of virtual network functions," in Proceedings of the IEEE 3rd International Conference on Cloud Networking (CloudNet '14), pp. 7–13, October 2014.

[21] M. Bouet, J. Leguay, and V. Conan, "Cost-based placement of vDPI functions in NFV infrastructures," in Proceedings of the 1st IEEE Conference on Network Softwarization (NetSoft '15), pp. 1–9, London, UK, April 2015.

[22] M. Xia, M. Shirazipour, Y. Zhang, H. Green, and A. Takacs, "Network function placement for NFV chaining in packet/optical datacenters," Journal of Lightwave Technology, vol. 33, no. 8, Article ID 7005460, pp. 1565–1570, 2015.

[23] O. Hyeonseok, Y. Daeun, C. Yoon-Ho, and K. Namgi, "Design of an efficient method for identifying virtual machines compatible with service chain in a virtual network environment," International Journal of Multimedia and Ubiquitous Engineering, vol. 11, no. 9, Article ID 197208, 2014.

[24] W. Cerroni and F. Callegati, "Live migration of virtual network functions in cloud-based edge networks," in Proceedings of the IEEE International Conference on Communications (ICC '14), pp. 2963–2968, IEEE, Sydney, Australia, June 2014.

[25] P. Quinn and J. Guichard, "Service function chaining: creating a service plane via network service headers," Computer, vol. 47, no. 11, pp. 38–44, 2014.

[26] M. F. Zhani, Q. Zhang, G. Simona, and R. Boutaba, "VDC Planner: dynamic migration-aware Virtual Data Center embedding for clouds," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 18–25, May 2013.

[27] M. Dobrescu, K. Argyraki, and S. Ratnasamy, "Toward predictable performance in software packet-processing platforms," in Proceedings of the 9th USENIX Symposium on Networked Systems Design and Implementation, pp. 1–14, April 2012.

Research Article
A Game for Energy-Aware Allocation of Virtualized Network Functions

Roberto Bruschi,1 Alessandro Carrega,1,2 and Franco Davoli1,2

1CNIT, University of Genoa Research Unit, 16145 Genoa, Italy
2DITEN, University of Genoa, 16145 Genoa, Italy

Correspondence should be addressed to Alessandro Carrega; alessandro.carrega@unige.it

Received 2 October 2015; Revised 22 December 2015; Accepted 11 January 2016

Academic Editor: Xavier Hesselbach

Copyright © 2016 Roberto Bruschi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Network Functions Virtualization (NFV) is a network architecture concept where network functionality is virtualized and separated into multiple building blocks that may connect or be chained together to implement the required services. The main advantages consist of an increase in network flexibility and scalability. Indeed, each part of the service chain can be allocated and reallocated at runtime, depending on demand. In this paper, we present and evaluate an energy-aware Game-Theory-based solution for resource allocation of Virtualized Network Functions (VNFs) within NFV environments. We consider each VNF as a player of the problem that competes for the physical network node capacity pool, seeking the minimization of individual cost functions. The physical network nodes dynamically adjust their processing capacity according to the incoming workload, by means of an Adaptive Rate (AR) strategy that aims at minimizing the product of energy consumption and processing delay. On the basis of the result of the nodes' AR strategy, the VNFs' resource sharing costs assume a polynomial form in the workloads, which admits a unique Nash Equilibrium (NE). We examine the effect of different (unconstrained and constrained) forms of the nodes' optimization problem on the equilibrium and compare the power consumption and delay achieved with energy-aware and non-energy-aware strategy profiles.

1. Introduction

In the last few years, power consumption has shown a growing and alarming trend in all industrial sectors, particularly in Information and Communication Technology (ICT). Public organizations, Internet Service Providers (ISPs), and telecom operators started reporting alarming statistics of network energy requirements and of the related carbon footprint since the first decade of the 2000s [1]. The Global e-Sustainability Initiative (GeSI) estimated a growth of ICT greenhouse gas emissions (in GtCO2e, Gt of CO2 equivalent gases) to 2.3% of global emissions (from 1.3% in 2002) in 2020, if no Green Network Technologies (GNTs) would be adopted [2]. On the other hand, the abatement potential of ICT in other industrial sectors is seven times the size of the ICT sector's own carbon footprint.

Only recently, due to the rise in energy price, the continuous growth of customer population, the increase in broadband access demand, and the expanding number of services being offered by telecoms and providers, has energy efficiency become a high-priority objective also for wired networks and service infrastructures (after having started to be addressed for datacenters and wireless networks).

The increasing network energy consumption essentially depends on new services offered, which follow Moore's law by doubling every two years, and on the need to sustain an ever-growing population of users and user devices. In order to support new generation network infrastructures and related services, telecoms and ISPs need a larger equipment base, with sophisticated architecture able to perform more and more complex operations in a scalable way. Notwithstanding these efforts, it is well known that most networks and networking equipment are currently still provisioned for busy or rush hour load, which typically exceeds their average utilization by a wide margin. While this margin is generally reached in rare and short time periods, the overall power consumption in today's networks remains more or less constant with respect to different traffic utilization levels.


The growing trend toward implementing networking functionalities by means of software [3] on general-purpose machines, and of making more aggressive use of virtualization, as represented by the paradigms of Software Defined Networking (SDN) [4] and Network Functions Virtualization (NFV) [5], would also not be sufficient in itself to reduce power consumption, unless accompanied by "green" optimization and consolidation strategies acting as energy-aware traffic engineering policies at the network level [6]. At the same time, processing devices inside the network need to be capable of adapting their performance to the changing traffic conditions, by trading off power and Quality of Service (QoS) requirements. Among the various techniques that can be adopted to this purpose to implement Control Policies (CPs) in network processing devices, Dynamic Adaptation ones consist of adapting the processing rate (AR) or of exploiting low power consumption states in idle periods (LPI) [7].

In this paper we introduce a Game-Theory-based solution for energy-aware allocation of Virtualized Network Functions (VNFs) within NFV environments. In more detail, in NFV networks a collection of service chains must be allocated on physical network nodes. A service chain is a set of one or more VNFs grouped together to provide specific service functionality, and it can be represented by an oriented graph, where each node corresponds to a particular VNF and each edge describes the operational flow exchanged between a pair of VNFs.

A service request can be allocated on dedicated hardware or by using resources deployed by the Service Provider (SP) that processes the request through virtualized instances. Because of this, two types of service deployments are possible in an NFV network: (i) on physical nodes and (ii) on virtualized instances.

In this paper we focus on the second type of service deployment. We refer to this paradigm as pure NFV. As already outlined above, the SP processes the service request by means of VNFs. In particular, we developed an energy-aware solution to the problem of VNFs' allocation on physical network nodes. This solution is based on the concept of Game Theory (GT). GT is used to model interactions among self-interested players and predict their choice of strategies to optimize cost or utility functions, until a Nash Equilibrium (NE) is reached, where no player can further increase its corresponding utility through individual action (see, e.g., [8] for specific applications in networking).

More specifically, we consider a bank of physical network nodes (in this paper we also use the terms node and resource to refer to the physical network node) performing tasks on requests submitted by the players' population. Hence, in this game the role of the players is represented by VNFs that compete for the processing capacity pool, each by seeking the minimization of an individual cost function. The nodes can dynamically adjust their processing capacity according to the incoming workload (the processing power required by incoming VNF requests) by means of an AR strategy that aims at minimizing the product of energy consumption and processing delay. On the basis of the result of the nodes' AR strategy, the VNFs' resource sharing costs assume a polynomial form in the workloads, which admits a unique NE.

We examine the effect of different (unconstrained and constrained) forms of the nodes' optimization problem on the equilibrium, and compare the power consumption and delay achieved with energy-aware and non-energy-aware strategy profiles.

The rest of the paper is organized as follows. Section 2 discusses the related work. Section 3 briefly summarizes the power model and AR optimization that was introduced in [9]. Although the scenario in [9] is different, it is reasonable to assume that similar considerations are valid here too. On the basis of this result, we derive a number of competitive strategies for the VNFs' allocation in Section 4. Section 5 presents various numerical evaluations, and Section 6 contains the conclusions.

2. Related Work

We now discuss the most relevant works that deal with the resource allocation of VNFs within NFV environments. We decided not only to describe solutions based on GT, but also to provide an overview of the current state of the art in this area. As introduced in the previous section, the key enabling paradigms that will considerably affect the dynamics of ICT networks are SDN and NFV, which are discussed in recent surveys [10–12]. Indeed, SPs and Network Operators (NOs) are facing increasing problems to design and implement novel network functionalities, following the rapid changes that characterize the current ISPs and telecom operators (TOs) [13].

Virtualization represents an efficient and cost-effective strategy to exploit and share physical network resources. In this context, the Network Embedding Problem (NEP) has been considered in several recent works [14–19]. In particular, the Virtual Network Embedding Problem (VNEP) consists of finding the mapping between a set of requests for virtual network resources and the available underlying physical infrastructure (the so-called substrate), ensuring that some given performance requirements (on nodes and links) are guaranteed. Typical node requirements are computational resources (i.e., CPU) or storage space, whereas links have a limited bandwidth and introduce a delay. It has been shown that this problem is NP-hard (it includes as subproblem the multiway separator problem). For this reason, heuristic approaches have been devised [20].

The consolidation of virtual resources is considered in [18], by taking into account energy efficiency. The problem is formulated as a mixed integer linear programming (MILP) model, to understand the potential benefits that can be achieved by packing many different virtual tasks on the same physical infrastructure. The observed energy saving is up to 30% for nodes and up to 25% for link energy consumption. Reference [21] presents a solution for the resilient deployment of network functions, using OpenStack for the design and implementation of the proposed service orchestrator mechanism.

An allocation mechanism based on auction theory is proposed in [22]. In particular, the scheme selects the most remunerative virtual network requests, while satisfying QoS requirements and physical network constraints. The system is split into two network substrates modeling physical and virtual resources, with the final goal of finding the best mapping of virtual nodes and links onto physical ones, according to the QoS requirements (i.e., bandwidth, delay, and CPU bounds).

A novel network architecture is proposed in [23], to provide efficient coordinated control of both internal network function state and network forwarding state, in order to help operators achieve the following goals: (i) offering and satisfying tight service level agreements (SLAs); (ii) accurately monitoring and manipulating network traffic; and (iii) minimizing operating expenses.

Various engineering problems, where the action of one component has some impacts on the other components, have been modeled by GT. Indeed, GT, which has been applied at the beginning in economics and related domains, is gaining much interest today as a powerful tool to analyze and design communication networks [24]. Therefore, the problems can be formulated in the GT framework, and a stable solution for the components is obtained using the concept of equilibrium [25]. In this regard, GT has been used extensively to develop understanding of stable operating points for autonomous networks. The nodes are considered as the players. Payoff functions are often defined according to achieved connection bandwidth or similar technical metrics.

To construct algorithms with provable convergence to equilibrium points, many approaches consider network models that can be mapped to specially constructed games. Among this type of games, potential games use a real-valued function that represents the entire player set, to optimize some performance metric [8, 26]. We mention Altman et al. [27], who provide an extensive survey on networking games. The models and papers discussed in this reference mostly deal with noncooperative GT, the only exception being a short section focused on bargaining games. Finally, Seddiki et al. [25] presented an approach based on two-stage noncooperative games for bandwidth allocation, which aims at reducing the complexity of network management and avoiding bandwidth performance problems in a virtualized network environment. The first stage of the game is the bandwidth negotiation, where the SP requests bandwidth from multiple Infrastructure Providers (InPs). Each InP decides whether to accept or deny the request, when the SP would cause link congestion. The second stage is the bandwidth provisioning game, where different SPs compete for the bandwidth capacity of a physical link managed by a single InP.

3. Modeling Power Management Techniques

Dynamic Adaptation techniques in physical network nodes reduce the energy usage by exploiting the fact that systems do not need to run at peak performance all the time.

Rate adaptation is obtained by tuning the clock frequency and/or voltage of processors (DVFS, Dynamic Voltage and Frequency Scaling), or by throttling the CPU clock (i.e., the clock signal is disabled for some number of cycles at regular intervals). Decreasing the operating frequency and the voltage of the node, or throttling its clock, obviously allows the reduction of power consumption and of heat dissipation, at the price of slower performance.

Advantage can be taken of these adaptation capabilities to devise control techniques based on feedback on the incoming load. One of the first proposals is that examined in a seminal paper by Nedevschi et al. [28]. Among other possibilities, we consider here the simple optimization strategy developed in [9], aimed at minimizing the product of power consumption and processing delay with respect to the processing load. The application of this control technique gives rise to a quadratic dependence of the power-delay product on the load, which will be exploited in our subsequent game-theoretic resource allocation problem. Unlike the cited works, our solution considers as load the request rate of services that the SP must process. However, apart from this difference, the general considerations described are valid here too.

Specifically, the dynamic frequency-dependent part of the processor power consumption is proportional to the clock frequency $\nu$ and to the square of the supply voltage $V$ [29]. However, if DVFS is used, there is proportionality between the frequency and the voltage raised to some power $\gamma$, $0 < \gamma \le 1$ [30]. Moreover, the power scaling effect induced by alternating active-idle periods in the queueing system served by the physical network node can be accounted for by multiplying the dynamic power consumption by an increasing concave function of the resource utilization, of the form $(f/\mu)^{1/\alpha}$, where $\alpha \ge 1$ is a parameter, and $f$ and $\mu$ are the workload (processing requests per unit time) and the task processing speed, respectively. By taking $\gamma = 1$ (which implies a cubic relationship in the operating frequency), the overall dependence of the power consumption $\Phi$ on the processing speed (which is proportional to the operating frequency) can be expressed as [9]

$$\Phi(\mu) = K \mu^3 \left( \frac{f}{\mu} \right)^{1/\alpha}, \quad (1)$$

where $K > 0$ is a proportionality constant. This expression will be exploited in the optimization problems to be defined and studied in the next section.
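As a quick numerical illustration of (1), consider the following minimal Python sketch (the constants chosen here are arbitrary placeholders, not values from the paper):

```python
def power_consumption(mu, f, K=1.0, alpha=2.0):
    """Dynamic power of Eq. (1): Phi(mu) = K * mu**3 * (f/mu)**(1/alpha).

    mu: processing speed (proportional to the operating frequency),
    f: workload (processing requests per unit time),
    K: proportionality constant (K > 0), alpha: exponent (alpha >= 1).
    """
    return K * mu**3 * (f / mu)**(1.0 / alpha)
```

For a fixed workload $f$, $\Phi$ grows as $\mu^{3 - 1/\alpha}$, that is, superlinearly in the speed; this is what makes it convenient to slow the node down whenever the load allows it.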

4. System Model of Coordinated Network Management Optimization and Players' Game

We consider the situation shown in Figure 1. There are $S$ VNFs, represented by the incoming workload rates $\lambda^1, \ldots, \lambda^S$, competing for $N$ physical network nodes. The workload to the $j$th node is given by

$$f_j = \sum_{i=1}^{S} f_j^i, \quad (2)$$

where $f_j^i$ is VNF $i$'s contribution. The total workload vector of player $i$ is $\mathbf{f}^i = \mathrm{col}\left[ f_1^i, \ldots, f_N^i \right]$ and, obviously,

$$\sum_{j=1}^{N} f_j^i = \lambda^i. \quad (3)$$

[Figure 1: System model for VNFs resource allocation.]

The computational resources have processing rates $\mu_j$, $j = 1, \ldots, N$, and we associate a cost function with each one, namely,

$$c_j(\mu_j) = \frac{K_j \left( f_j \right)^{1/\alpha_j} \left( \mu_j \right)^{3 - 1/\alpha_j}}{\mu_j - f_j}. \quad (4)$$

Cost functions of the form shown in (4) have been introduced in [9] and represent the product of power and processing delay, under M/M/1 assumption on each node's queue. They actually correspond to the average energy consumption that can be attributed to the functions' handling (considering that the node will be active whenever there are queued functions). Despite the possible inaccuracy in the representation of the queueing behavior, the denominator of (4) reflects the penalty paid for approaching the resource capacity, which is a major aspect in a model to be used for control purposes.

We now consider two specific control problems, whose resulting resource allocations are interconnected. Specifically, we assume the presence of Control Policies (CPs) acting in the network node, whose aim is to dynamically adjust the processing rate, in order to minimize cost functions of the form of (4). On the other hand, the players representing the VNFs, knowing the policy adopted by the CPs and the resulting form of their optimal costs, implement their own strategies to distribute their requests among the different resources.

The simple control structure that we envisage actually represents a situation that may be of interest in a number of different operational settings. We only mention two different cases of current high relevance: (i) a multitenant environment, where a number of ISPs are offered services by a datacenter operator (or by a telecom operator in the access network) on a number of virtual machines, and (ii) a collection of virtual environments created by a SP on behalf of its customers, where services are activated to represent customers' functionalities on their own private virtual LANs. It is worth noting that, in both cases, the nodes' customers may be unwilling to disclose their loads to one another, which justifies the decentralized game optimization.

4.1. Nodes' Control Policies. We consider two possible variants.

4.1.1. CP1 (Unconstrained). The direct minimization of (4) with respect to $\mu_j$ yields immediately

$$\mu_j^*(f_j) = \frac{3 \alpha_j - 1}{2 \alpha_j - 1} f_j = \theta_j f_j,$$

$$c_j^*(f_j) = K_j \frac{\left( \theta_j \right)^{3 - 1/\alpha_j}}{\theta_j - 1} \left( f_j \right)^2 = h_j \cdot \left( f_j \right)^2. \quad (5)$$

We must note that we are considering a continuous solution to the node capacity adjustment. In practice, the physical resources allow a discrete set of working frequencies, with corresponding processing capacities. This would also ensure that the processing speed does not decrease below a lower threshold, avoiding excessive delay in the case of low load. The unconstrained problem is an idealized variant that would make sense only when the load on the node is not too small.
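In code, the closed form (5) is a one-liner per coefficient. The following sketch (Python, an illustration rather than the paper's MATLAB implementation) computes $\theta_j$ and the quadratic cost coefficient $h_j$:

```python
def cp1_coefficients(K_j, alpha_j):
    """CP1 closed form (Eq. (5)):
    mu*_j(f_j) = theta_j * f_j and c*_j(f_j) = h_j * f_j**2."""
    theta_j = (3.0 * alpha_j - 1.0) / (2.0 * alpha_j - 1.0)
    h_j = K_j * theta_j**(3.0 - 1.0 / alpha_j) / (theta_j - 1.0)
    return theta_j, h_j
```

Note that for $\alpha_j \in [2, 3]$, the range used in Section 5, $\theta_j$ varies only between $5/3$ and $8/5$: the optimal speed is always a modest constant multiple of the incoming load.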

4.1.2. CP2 (Individual Constraints). Each node has a maximum and a minimum processing capacity, which are taken explicitly into account ($\mu_j^{\min} \le \mu_j \le \mu_j^{\max}$, $j = 1, \ldots, N$). Then

$$\mu_j^*(f_j) = \begin{cases} \theta_j f_j, & \underline{f}_j \le f_j \le \overline{f}_j, \\ \mu_j^{\min}, & 0 < f_j < \underline{f}_j, \\ \mu_j^{\max}, & \overline{f}_j < f_j < \mu_j^{\max}, \end{cases} \quad (6)$$

where $\underline{f}_j = \mu_j^{\min}/\theta_j$ and $\overline{f}_j = \mu_j^{\max}/\theta_j$. Therefore,

$$c_j^*(f_j) = \begin{cases} h_j \cdot \left( f_j \right)^2, & \underline{f}_j \le f_j \le \overline{f}_j, \\[4pt] \dfrac{K_j \left( f_j \right)^{1/\alpha_j} \left( \mu_j^{\max} \right)^{3 - 1/\alpha_j}}{\mu_j^{\max} - f_j}, & \overline{f}_j < f_j < \mu_j^{\max}, \\[4pt] \dfrac{K_j \left( f_j \right)^{1/\alpha_j} \left( \mu_j^{\min} \right)^{3 - 1/\alpha_j}}{\mu_j^{\min} - f_j}, & 0 < f_j < \underline{f}_j. \end{cases} \quad (7)$$

In this case, we assume explicitly that $\sum_{i=1}^{S} \lambda^i < \sum_{j=1}^{N} \mu_j^{\max}$.
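Operationally, CP2 amounts to clamping the CP1 rate to the interval $[\mu_j^{\min}, \mu_j^{\max}]$. A compact sketch (reusing cp1_coefficients from the previous example and assuming $0 < f < \mu_j^{\max}$, consistently with the assumption above) is:

```python
def cp2_optimal(f, K_j, alpha_j, mu_min, mu_max):
    """CP2 (Eqs. (6)-(7)): optimal rate and cost under rate bounds."""
    theta_j, h_j = cp1_coefficients(K_j, alpha_j)
    f_lo, f_hi = mu_min / theta_j, mu_max / theta_j   # thresholds of Eq. (6)
    if f_lo <= f <= f_hi:
        return theta_j * f, h_j * f**2                # unconstrained branch
    mu = mu_min if f < f_lo else mu_max               # clamped branch
    cost = K_j * f**(1.0 / alpha_j) * mu**(3.0 - 1.0 / alpha_j) / (mu - f)
    return mu, cost
```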

4.2. Noncooperative Players' Optimization. Given the optimal CPs, which can be found in functional form, we can then state the following.

4.2.1. Players' Problem. Each VNF $i$ wants to minimize, with respect to its workload vector $\mathbf{f}^i = \mathrm{col}\left[ f_1^i, \ldots, f_N^i \right]$, a weighted (over its workload distribution) sum of the resources' costs, given the flow distribution of the others:

$$\mathbf{f}^{i*} = \arg\min_{\substack{f_1^i, \ldots, f_N^i \\ \sum_{j=1}^{N} f_j^i = \lambda^i}} J^i = \arg\min_{\substack{f_1^i, \ldots, f_N^i \\ \sum_{j=1}^{N} f_j^i = \lambda^i}} \frac{1}{\lambda^i} \sum_{j=1}^{N} f_j^i\, c_j^*(f_j), \qquad i = 1, \ldots, S. \quad (8)$$

In this formulation, the players' problem is a noncooperative game, of which we can seek a NE [31].

We examine the case of the application of CP1 first. Then the cost of VNF $i$ is given by

$$J^i = \frac{1}{\lambda^i} \sum_{j=1}^{N} f_j^i h_j \left( f_j \right)^2 = \frac{1}{\lambda^i} \sum_{j=1}^{N} f_j^i h_j \left( f_j^i + f_j^{-i} \right)^2, \quad (9)$$

where $f_j^{-i} = \sum_{k \neq i} f_j^k$ is the total flow from the other VNFs to node $j$.

The cost in (9) is convex in $f_j^i$ and belongs to a category of cost functions previously investigated in the networking literature, without specific reference to energy efficiency [32, 33]. In particular, it is in the family of functions considered in [33], for which the NE Point (NEP) existence and uniqueness conditions of Rosen [34] have been shown to hold. Therefore, our players' problem admits a unique NEP. The latter can be found by considering the Karush-Kuhn-Tucker stationarity conditions with Lagrange multiplier $\xi^i$:

$$\begin{aligned} \xi^i &= \frac{\partial J^i}{\partial f_k^i}, & f_k^i &> 0, \\ \xi^i &\le \frac{\partial J^i}{\partial f_k^i}, & f_k^i &= 0, \end{aligned} \qquad k = 1, \ldots, N, \quad (10)$$

from which, for $f_k^i > 0$,

$$\lambda^i \xi^i = h_k \left( f_k^i + f_k^{-i} \right)^2 + 2 h_k f_k^i \left( f_k^i + f_k^{-i} \right),$$

$$f_k^i = -\frac{2}{3} f_k^{-i} \pm \frac{1}{3 h_k} \sqrt{\left( h_k f_k^{-i} \right)^2 + 3 h_k \lambda^i \xi^i}. \quad (11)$$

Excluding the negative solution, the nonzero components are those with the smallest and equal partial derivatives; in a subset $\mathcal{M} \subseteq \{1, \ldots, N\}$ they yield $\sum_{j \in \mathcal{M}} f_j^i = \lambda^i$, that is,

$$\sum_{j \in \mathcal{M}} \left[ -\frac{2}{3} f_j^{-i} + \frac{1}{3 h_j} \sqrt{\left( h_j f_j^{-i} \right)^2 + 3 h_j \lambda^i \xi^i} \right] = \lambda^i, \quad (12)$$

from which $\xi^i$ can be determined.
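In practice, the NEP can be computed by iterating best responses, each of which is the water-filling solution of (11)-(12), with the multiplier found by bisection. The sketch below is one such implementation under our own assumptions (it is not the authors' MATLAB code; uniqueness of the NEP guarantees that all converging iterations reach the same point, while convergence of this Gauss–Seidel scheme is only observed empirically here):

```python
import numpy as np

def best_response(lam_i, f_other, h, tol=1e-10):
    """Water-filling best response of one VNF (Eqs. (11)-(12)).

    lam_i: the player's total rate lambda^i; f_other[j]: aggregate flow of
    the other players on node j; h[j]: quadratic coefficient of node j."""
    def alloc(nu):  # nu stands for lambda^i * xi^i in Eq. (11)
        x = (-(2.0 / 3.0) * f_other
             + np.sqrt((h * f_other) ** 2 + 3.0 * h * nu) / (3.0 * h))
        return np.maximum(x, 0.0)
    lo, hi = 0.0, 1.0
    while alloc(hi).sum() < lam_i:        # enlarge the bracket for nu
        hi *= 2.0
    while hi - lo > tol:                  # bisection on the multiplier
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if alloc(mid).sum() < lam_i else (lo, mid)
    return alloc(hi)

def solve_nep(lam, h, iters=200):
    """Iterated best responses; the unique NEP is the fixed point."""
    lam, h = np.asarray(lam, float), np.asarray(h, float)
    f = np.outer(lam, np.ones_like(h) / len(h))   # uniform initial split
    for _ in range(iters):
        for i in range(len(lam)):
            f[i] = best_response(lam[i], f.sum(axis=0) - f[i], h)
    return f
```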

As regards form (7) of the optimal node cost, we can note that, if we are restricted to $\underline{f}_j \le f_j \le \overline{f}_j$, $j = 1, \ldots, N$ (a coupled convex constraint set), the conditions of Rosen still hold. If we allow the larger constraint set $0 < f_j < \mu_j^{\max}$, $j = 1, \ldots, N$, the nodes' optimal cost functions are no longer of polynomial type. However, the composite function is still continuously differentiable (see the Appendix). Then it would (i) satisfy the conditions for a convex game (the overall function composed of the three parts is convex), which guarantee the existence of a NEP [34, Th. 1], and (ii) possess equivalent properties to functions of Type A as defined in [32], which lead to uniqueness of the NEP.

5. Numerical Results

In order to evaluate our criterion, we present numerical results deriving from the application of the simplest Control Policy (CP1), which implies the solution of a completely quadratic problem (corresponding to costs as in (9) for the NEP). We compare the resulting allocations, power consumption, and average delays with those stemming from the application (on non-energy-aware nodes) of the algorithm for the minimization of the pure delay function that was developed in [35, Proposition 1]. This algorithm is considered a valid reference point, in order to provide a good validation of our criterion. The corresponding cost of VNF $i$ is

$$J_T^i = \sum_{j=1}^{N} f_j^i T_j(f_j), \qquad i = 1, \ldots, S, \quad (13)$$

where

$$T_j(f_j) = \begin{cases} \dfrac{1}{\mu_j^{\max} - f_j}, & f_j < \mu_j^{\max}, \\[4pt] \infty, & f_j \ge \mu_j^{\max}. \end{cases} \quad (14)$$

The total demand is assumed to be less than the total operating capacity, as we have done with our CP2 in (7).

To make the comparison fair, we have to make sure that the maximum operating capacities $\mu_j^{\max}$, $j = 1, \ldots, N$ (that are constant in (14)), are compatible with the quadratic behavior stemming from the application of CP1. To this aim, let ${}^{(\mathrm{LCP1})}f_j$ denote the total input workload to node $j$ corresponding to the NEP obtained by our problem; we then choose

$$\mu_j^{\max} = \theta_j\, {}^{(\mathrm{LCP1})}f_j, \qquad j = 1, \ldots, N. \quad (15)$$

We indicate with ${}^{(T)}f_j$, ${}^{(T)}f_j^i$, $j = 1, \ldots, N$, $i = 1, \ldots, S$, the total node input workloads and those corresponding to VNF $i$, respectively, produced by the NEP deriving from the minimization of the costs in (13). The average delays for the flow of player $i$ under the two strategy profiles are obtained as

$$D_T^i = \frac{1}{\lambda^i} \sum_{j=1}^{N} \frac{{}^{(T)}f_j^i}{\mu_j^{\max} - {}^{(T)}f_j} = \frac{1}{\lambda^i} \sum_{j=1}^{N} \frac{{}^{(T)}f_j^i}{\theta_j\, {}^{(\mathrm{LCP1})}f_j - {}^{(T)}f_j},$$

$$D_{\mathrm{LCP1}}^i = \frac{1}{\lambda^i} \sum_{j=1}^{N} \frac{{}^{(\mathrm{LCP1})}f_j^i}{\theta_j\, {}^{(\mathrm{LCP1})}f_j - {}^{(\mathrm{LCP1})}f_j} = \frac{1}{\lambda^i} \sum_{j=1}^{N} \frac{{}^{(\mathrm{LCP1})}f_j^i}{\left( \theta_j - 1 \right) {}^{(\mathrm{LCP1})}f_j}, \quad (16)$$

and the power consumption values are given by

$${}^{(T)}P_j = K_j \left( \mu_j^{\max} \right)^3 = K_j \left( \theta_j\, {}^{(\mathrm{LCP1})}f_j \right)^3,$$

$${}^{(\mathrm{LCP1})}P_j = K_j \left( \theta_j\, {}^{(\mathrm{LCP1})}f_j \right)^3 \left( \frac{{}^{(\mathrm{LCP1})}f_j}{\theta_j\, {}^{(\mathrm{LCP1})}f_j} \right)^{1/\alpha_j} = K_j \left( \theta_j\, {}^{(\mathrm{LCP1})}f_j \right)^3 \theta_j^{-1/\alpha_j}. \quad (17)$$

Our data for the comparison are organized as follows. We consider normalized input rates, so that $\lambda^i \le 1$, $i = 1, \ldots, S$. We generate randomly $S$ values corresponding to the VNFs' input rates from independent uniform distributions; then

(1) we find the NEP of our quadratic game by choosing the parameters $K_j$, $\alpha_j$, $j = 1, \ldots, N$, also from independent uniform distributions (with $1 \le K_j \le 10$ and $2 \le \alpha_j \le 3$, resp.);

(2) by using the workload profiles obtained, we compute the values $\mu_j^{\max}$, $j = 1, \ldots, N$, from (15) and find the NEP corresponding to the costs in (13) and (14), by using the same input rates;

(3) we compute the corresponding average delays and power consumption values for the two cases, by using (16) and (17), respectively.

Steps (1)–(3) are repeated with a new configuration of random values of the input rates for a certain number of times; then, for both games, we average the values of the delays and power consumption per VNF, and of the total delay and power consumption averaged over the VNFs, over all trials. We compare the results obtained in this way for four different settings of VNFs and nodes, namely, $S = N = 3$; $S = 3$, $N = 5$; $S = 5$, $N = 3$; $S = 5$, $N = 5$. The rationale of repeating the experiments with random configurations is to explore a number of different possible settings and collect the results in a synthetic form.
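A skeleton of a single trial for the EE side, reusing cp1_coefficients and solve_nep from the sketches above, might look as follows (the NEE game on costs (13)-(14) is omitted for brevity; the uniform ranges for $K_j$ and $\alpha_j$ are those of step (1), while the seed is an arbitrary choice):

```python
rng = np.random.default_rng(0)   # fixed seed only for repeatability

def one_trial(S=3, N=3):
    """One random configuration: step (1) and the EE part of step (3)."""
    lam = rng.uniform(0.0, 1.0, S)          # normalized VNF input rates
    K = rng.uniform(1.0, 10.0, N)
    alpha = rng.uniform(2.0, 3.0, N)
    theta, h = np.array([cp1_coefficients(K[j], alpha[j])
                         for j in range(N)]).T
    f = solve_nep(lam, h)                   # S x N NEP workload matrix
    f_tot = f.sum(axis=0)                   # the (LCP1)f_j of Eq. (15)
    power = K * (theta * f_tot) ** 3 * theta ** (-1.0 / alpha)   # Eq. (17)
    delay = (f / ((theta - 1.0) * f_tot)).sum(axis=1) / lam      # Eq. (16)
    return power, delay   # per-node powers, per-VNF average delays
```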

For each VNFs-nodes setting, 30 random input configurations are generated to produce the final average values. In Figures 2–10, the labels EE and NEE refer to the energy-efficient case (quadratic cost) and to the non-energy-efficient one (see (13) and (14)), respectively. Instead, the label "Saving (%)" refers to the saving (in percentage) of the EE case compared to the NEE one; when it is negative, the EE case presents higher values than the NEE one. Finally, the label "User" refers to the VNF.

The results are reported in Figures 2–10. The model implementation and the relative results have been developed with MATLAB [36].

[Figure 2: Power consumption with S = N = 3.]

[Figure 3: User (VNF) delays with S = N = 3.]

In general, the energy-efficient optimization tends to save energy at the expense of a relatively small increase in delay. Indeed, as shown in the figures, the saving is positive for the energy consumption and negative for the delay. The energy saving is always higher than 18% in every case, while the delay increase is always lower than 4%.

However, it can be noted (by comparing, e.g., Figures 5 and 7) that configurations with lower load are penalized in the delay. This effect is caused by the behavior of the simple linear adaptation strategy, which maintains constant utilization and highlights the relevance of setting not only maximum, but also minimum, values for the processing capacity (or for the node input workloads).

Another consideration arising from the results concerns the variation of the energy consumption by changing the number of players and/or nodes. For example, Figure 6 (case S = 5, N = 3) shows a total power consumption significantly higher than the one shown in Figure 8 (case S = 5, N = 5). The reason for this difference is the lower workload managed by the nodes in the case S = 5, N = 5 (greater number of nodes with an equal number of users), making it possible to exploit the AR mechanisms with significant energy saving.

Finally, Figure 10 shows the product of the power consumption and the delay for each studied case. As described in the previous sections, our criterion aims at minimizing this product. As can be easily seen, the saving resulting from the application of our policy is always above 10%.

[Figure 4: Power consumption with S = 3, N = 5.]

[Figure 5: User (VNF) delays with S = 3, N = 5.]

[Figure 6: Power consumption with S = 5, N = 3.]

[Figure 7: User (VNF) delays with S = 5, N = 3.]

[Figure 8: Power consumption with S = 5, N = 5.]

[Figure 9: User (VNF) delays with S = 5, N = 5.]

[Figure 10: Power consumption and delay product of the various cases.]

6. Conclusions

We have examined the possible effect of introducing simple energy optimization strategies in physical network nodes performing services for a set of VNFs that request their computational power within an NFV environment. The VNFs' strategy profiles for resource sharing have been obtained from the solution of a noncooperative game, where the players are aware of the adaptive behavior of the resources and aim at minimizing costs that reflect this behavior. We have compared the energy consumption and processing delay with those stemming from a game played with non-energy-aware nodes and VNFs. The results show that the effect on delay is almost negligible, whereas significant power saving can be obtained. In particular, the energy saving is always higher than 18% in every case, with a delay increase always lower than 4%.

From another point of view, it would also be of interest to investigate socially optimal solutions stemming from a Nash Bargaining Problem (NBP), along the lines of [37], by defining suitable utility functions for the players. Another interesting evolution of this work could be the application of additional constraints to the proposed solution, such as, for example, the maximum delay that a flow can support.

Appendix

We check the maintenance of continuous differentiability at the point $f_j = \mu_j^{\max}/\theta_j$. By noting that the derivative of the cost function of player $i$ with respect to $f_j^i$ has the form

$$\frac{\partial J^i}{\partial f_j^i} = c_j^*(f_j) + f_j^i\, c_j^{*\prime}(f_j), \quad (\mathrm{A.1})$$

it is immediate to note that we need to compare

$$\frac{\partial}{\partial f_j^i} \left. \left[ \frac{K_j \left( f_j^i + f_j^{-i} \right)^{1/\alpha_j} \left( \mu_j^{\max} \right)^{3 - 1/\alpha_j}}{\mu_j^{\max} - f_j^i - f_j^{-i}} \right] \right|_{f_j^i = \mu_j^{\max}/\theta_j - f_j^{-i}}, \qquad \frac{\partial}{\partial f_j^i} \left. \left[ h_j \left( f_j \right)^2 \right] \right|_{f_j^i = \mu_j^{\max}/\theta_j - f_j^{-i}}. \quad (\mathrm{A.2})$$

For the first expression we have

$$K_j \left. \frac{(1/\alpha_j) \left( f_j \right)^{1/\alpha_j - 1} \left( \mu_j^{\max} \right)^{3 - 1/\alpha_j} \left( \mu_j^{\max} - f_j \right) + \left( f_j \right)^{1/\alpha_j} \left( \mu_j^{\max} \right)^{3 - 1/\alpha_j}}{\left( \mu_j^{\max} - f_j \right)^2} \right|_{f_j = \mu_j^{\max}/\theta_j} = \frac{K_j \left( \mu_j^{\max} \right)^{3 - 1/\alpha_j} \left( \mu_j^{\max}/\theta_j \right)^{1/\alpha_j} \left[ (1/\alpha_j) \left( \mu_j^{\max}/\theta_j \right)^{-1} \left( \mu_j^{\max} - \mu_j^{\max}/\theta_j \right) + 1 \right]}{\left( \mu_j^{\max} - \mu_j^{\max}/\theta_j \right)^2} = \frac{K_j \left( \mu_j^{\max} \right)^3 \left( \theta_j \right)^{-1/\alpha_j} \left[ (1/\alpha_j)\, \theta_j \left( 1 - 1/\theta_j \right) + 1 \right]}{\left( \mu_j^{\max} \right)^2 \left( 1 - 1/\theta_j \right)^2} = \frac{K_j\, \mu_j^{\max} \left( \theta_j \right)^{-1/\alpha_j} \left( \theta_j/\alpha_j - 1/\alpha_j + 1 \right)}{\left( \left( \theta_j - 1 \right)/\theta_j \right)^2},$$

while for the second

$$2 h_j f_j \Big|_{f_j = \mu_j^{\max}/\theta_j} = 2 K_j \frac{\left( \theta_j \right)^{3 - 1/\alpha_j}}{\theta_j - 1} \cdot \frac{\mu_j^{\max}}{\theta_j}. \quad (\mathrm{A.3})$$

Then let us check when the two coincide:

$$\frac{K_j\, \mu_j^{\max} \left( \theta_j \right)^{-1/\alpha_j} \left( \theta_j/\alpha_j - 1/\alpha_j + 1 \right)}{\left( \left( \theta_j - 1 \right)/\theta_j \right)^2} = 2 K_j \frac{\left( \theta_j \right)^{3 - 1/\alpha_j}}{\theta_j - 1} \cdot \frac{\mu_j^{\max}}{\theta_j} \;\Longleftrightarrow\; \frac{\left( \theta_j/\alpha_j - 1/\alpha_j + 1 \right) \left( \theta_j \right)^2}{\theta_j - 1} = 2 \left( \theta_j \right)^2 \;\Longleftrightarrow\; \frac{\theta_j}{\alpha_j} - \frac{1}{\alpha_j} + 1 = 2 \theta_j - 2.$$

Substituting $\theta_j = (3 \alpha_j - 1)/(2 \alpha_j - 1)$,

$$\left( \frac{1}{\alpha_j} - 2 \right) \frac{3 \alpha_j - 1}{2 \alpha_j - 1} - \frac{1}{\alpha_j} + 3 = 0 \;\Longleftrightarrow\; \frac{1 - 2 \alpha_j}{\alpha_j} \cdot \frac{3 \alpha_j - 1}{2 \alpha_j - 1} - \frac{1}{\alpha_j} + 3 = 0 \;\Longleftrightarrow\; \frac{1 - 3 \alpha_j}{\alpha_j} - \frac{1}{\alpha_j} + 3 = 0 \;\Longleftrightarrow\; 1 - 3 \alpha_j - 1 + 3 \alpha_j = 0, \quad (\mathrm{A.4})$$

which holds identically for any $\alpha_j$; hence the two one-sided derivatives coincide, and $c_j^*$ in (7) is continuously differentiable at $f_j = \mu_j^{\max}/\theta_j$.
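As the algebra above is somewhat dense, a numerical spot-check is reassuring. The sketch below (parameter values arbitrary; cp1_coefficients is the helper from the example in Section 4.1) compares the two one-sided difference quotients of $c_j^*$ at the junction:

```python
def check_smoothness(K_j=2.0, alpha_j=2.5, mu_max=1.0, eps=1e-6):
    """One-sided slopes of c*_j (Eq. (7)) at f = mu_max/theta_j agree."""
    theta_j, h_j = cp1_coefficients(K_j, alpha_j)
    f0 = mu_max / theta_j
    def c_hi(f):   # branch of Eq. (7) for f > mu_max/theta_j
        return (K_j * f ** (1.0 / alpha_j)
                * mu_max ** (3.0 - 1.0 / alpha_j) / (mu_max - f))
    left = (h_j * f0 ** 2 - h_j * (f0 - eps) ** 2) / eps   # quadratic branch
    right = (c_hi(f0 + eps) - c_hi(f0)) / eps              # clamped branch
    assert abs(left - right) < 1e-3 * abs(left)   # slopes match up to O(eps)
    return left, right
```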

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was partially supported by the H2020 European Projects ARCADIA (http://www.arcadia-framework.eu/) and INPUT (http://www.input-project.eu/), funded by the European Commission.

References

[1] C. Bianco, F. Cucchietti, and G. Griffa, "Energy consumption trends in the next generation access network – a telco perspective," in Proceedings of the 29th International Telecommunication Energy Conference (INTELEC '07), pp. 737–742, Rome, Italy, September–October 2007.
[2] GeSI, SMARTer 2020: The Role of ICT in Driving a Sustainable Future, December 2012, http://gesi.org/portfolio/report/72.
[3] A. Manzalini, "Future edge ICT networks," IEEE COMSOC MMTC E-Letter, vol. 7, no. 7, pp. 1–4, 2012.
[4] B. A. A. Nunes, M. Mendonca, X.-N. Nguyen, K. Obraczka, and T. Turletti, "A survey of software-defined networking: past, present, and future of programmable networks," IEEE Communications Surveys & Tutorials, vol. 16, no. 3, pp. 1617–1634, 2014.
[5] "Network functions virtualisation – introductory white paper," in Proceedings of the SDN and OpenFlow World Congress, Darmstadt, Germany, October 2012.
[6] R. Bolla, R. Bruschi, F. Davoli, and C. Lombardo, "Fine-grained energy-efficient consolidation in SDN networks and devices," IEEE Transactions on Network and Service Management, vol. 12, no. 2, pp. 132–145, 2015.
[7] R. Bolla, R. Bruschi, F. Davoli, and F. Cucchietti, "Energy efficiency in the future internet: a survey of existing approaches and trends in energy-aware fixed network infrastructures," IEEE Communications Surveys & Tutorials, vol. 13, no. 2, pp. 223–244, 2011.
[8] I. Menache and A. Ozdaglar, Network Games: Theory, Models and Dynamics, Morgan & Claypool Publishers, San Rafael, Calif, USA, 2011.
[9] R. Bolla, R. Bruschi, F. Davoli, and P. Lago, "Optimizing the power-delay product in energy-aware packet forwarding engines," in Proceedings of the 24th Tyrrhenian International Workshop on Digital Communications – Green ICT (TIWDC '13), pp. 1–5, IEEE, Genoa, Italy, September 2013.
[10] A. Khan, A. Zugenmaier, D. Jurca, and W. Kellerer, "Network virtualization: a hypervisor for the internet," IEEE Communications Magazine, vol. 50, no. 1, pp. 136–143, 2012.
[11] Q. Duan, Y. Yan, and A. V. Vasilakos, "A survey on service-oriented network virtualization toward convergence of networking and cloud computing," IEEE Transactions on Network and Service Management, vol. 9, no. 4, pp. 373–392, 2012.
[12] G. M. Roy, S. K. Saurabh, N. M. Upadhyay, and P. K. Gupta, "Creation of virtual node, virtual link and managing them in network virtualization," in Proceedings of the World Congress on Information and Communication Technologies (WICT '11), pp. 738–742, IEEE, Mumbai, India, December 2011.
[13] A. Manzalini, R. Saracco, C. Buyukkoc, et al., "Software-defined networks or future networks and services – main technical challenges and business implications," in Proceedings of the IEEE Workshop SDN4FNS, Trento, Italy, November 2013.
[14] M. Yu, Y. Yi, J. Rexford, and M. Chiang, "Rethinking virtual network embedding: substrate support for path splitting and migration," ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, pp. 17–19, 2008.
[15] A. Jarray and A. Karmouch, "Decomposition approaches for virtual network embedding with one-shot node and link mapping," IEEE/ACM Transactions on Networking, vol. 23, no. 3, pp. 1012–1025, 2015.
[16] N. M. M. K. Chowdhury, M. R. Rahman, and R. Boutaba, "Virtual network embedding with coordinated node and link mapping," in Proceedings of the 28th Conference on Computer Communications (IEEE INFOCOM '09), pp. 783–791, Rio de Janeiro, Brazil, April 2009.
[17] X. Cheng, S. Su, Z. Zhang, et al., "Virtual network embedding through topology-aware node ranking," ACM SIGCOMM Computer Communication Review, vol. 41, no. 2, pp. 38–47, 2011.
[18] J. F. Botero, X. Hesselbach, M. Duelli, D. Schlosser, A. Fischer, and H. De Meer, "Energy efficient virtual network embedding," IEEE Communications Letters, vol. 16, no. 5, pp. 756–759, 2012.
[19] A. Leivadeas, C. Papagianni, and S. Papavassiliou, "Efficient resource mapping framework over networked clouds via iterated local search-based request partitioning," IEEE Transactions on Parallel and Distributed Systems, vol. 24, no. 6, pp. 1077–1086, 2013.
[20] A. Fischer, J. F. Botero, M. T. Beck, H. de Meer, and X. Hesselbach, "Virtual network embedding: a survey," IEEE Communications Surveys & Tutorials, vol. 15, no. 4, pp. 1888–1906, 2013.
[21] M. Scholler, M. Stiemerling, A. Ripke, and R. Bless, "Resilient deployment of virtual network functions," in Proceedings of the 5th International Congress on Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT '13), pp. 208–214, Almaty, Kazakhstan, September 2013.
[22] A. Jarray and A. Karmouch, "VCG auction-based approach for efficient Virtual Network embedding," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 609–615, Ghent, Belgium, May 2013.
[23] A. Gember-Jacobson, R. Viswanathan, C. Prakash, et al., "OpenNF: enabling innovation in network function control," in Proceedings of the ACM Conference on Special Interest Group on Data Communication (SIGCOMM '14), pp. 163–174, ACM, Chicago, Ill, USA, August 2014.
[24] J. Antoniou and A. Pitsillides, Game Theory in Communication Networks: Cooperative Resolution of Interactive Networking Scenarios, CRC Press, Boca Raton, Fla, USA, 2013.
[25] M. S. Seddiki, M. Frikha, and Y.-Q. Song, "A non-cooperative game-theoretic framework for resource allocation in network virtualization," Telecommunication Systems, vol. 61, no. 2, pp. 209–219, 2016.
[26] L. Pavel, Game Theory for Control of Optical Networks, Springer, New York, NY, USA, 2012.
[27] E. Altman, T. Boulogne, R. El-Azouzi, T. Jimenez, and L. Wynter, "A survey on networking games in telecommunications," Computers & Operations Research, vol. 33, no. 2, pp. 286–311, 2006.
[28] S. Nedevschi, L. Popa, G. Iannaccone, S. Ratnasamy, and D. Wetherall, "Reducing network energy consumption via sleeping and rate-adaptation," in Proceedings of the 5th USENIX Symposium on Networked Systems Design and Implementation (NSDI '08), pp. 323–336, San Francisco, Calif, USA, April 2008.
[29] S. Henzler, Power Management of Digital Circuits in Deep Sub-Micron CMOS Technologies, vol. 25 of Springer Series in Advanced Microelectronics, Springer, Dordrecht, The Netherlands, 2007.
[30] K. Li, "Scheduling parallel tasks on multiprocessor computers with efficient power management," in Proceedings of the IEEE International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum (IPDPSW '10), pp. 1–8, IEEE, Atlanta, Ga, USA, April 2010.
[31] T. Basar, Lecture Notes on Non-Cooperative Game Theory, Hamilton Institute, Kildare, Ireland, 2010, http://www.hamilton.ie/ollie/Downloads/Game.pdf.
[32] A. Orda, R. Rom, and N. Shimkin, "Competitive routing in multiuser communication networks," IEEE/ACM Transactions on Networking, vol. 1, no. 5, pp. 510–521, 1993.
[33] E. Altman, T. Basar, T. Jimenez, and N. Shimkin, "Competitive routing in networks with polynomial costs," IEEE Transactions on Automatic Control, vol. 47, no. 1, pp. 92–96, 2002.
[34] J. B. Rosen, "Existence and uniqueness of equilibrium points for concave N-person games," Econometrica, vol. 33, no. 3, pp. 520–534, 1965.
[35] Y. A. Korilis, A. A. Lazar, and A. Orda, "Architecting noncooperative networks," IEEE Journal on Selected Areas in Communications, vol. 13, no. 7, pp. 1241–1251, 1995.
[36] MATLAB – The Language of Technical Computing, http://www.mathworks.com/products/matlab/.
[37] H. Yaïche, R. R. Mazumdar, and C. Rosenberg, "A game theoretic framework for bandwidth allocation and pricing in broadband networks," IEEE/ACM Transactions on Networking, vol. 8, no. 5, pp. 667–678, 2000.

Research Article
A Processor-Sharing Scheduling Strategy for NFV Nodes

Giuseppe Faraci, Alfio Lombardo, and Giovanni Schembra

Dipartimento di Ingegneria Elettrica, Elettronica e Informatica (DIEEI), University of Catania, 95123 Catania, Italy

Correspondence should be addressed to Giovanni Schembra; schembra@dieei.unict.it

Received 2 November 2015; Accepted 12 January 2016

Academic Editor: Xavier Hesselbach

Copyright © 2016 Giuseppe Faraci et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The introduction of the two paradigms SDN and NFV to "softwarize" the current Internet is making management and resource allocation two key challenges in the evolution towards the Future Internet. In this context, this paper proposes Network-Aware Round Robin (NARR), a processor-sharing strategy to reduce delays in traversing SDN/NFV nodes. The application of NARR alleviates the job of the Orchestrator by automatically working at the intranode level, dynamically assigning the processor slices to the virtual network functions (VNFs) according to the state of the queues associated with the output links of the network interface cards (NICs). An extensive simulation set is presented to show the improvements achieved with respect to two more processor-sharing strategies chosen as reference.

1. Introduction

In the last few years, the diffusion of new complex and efficient distributed services in the Internet is becoming increasingly difficult, because of the ossification of the Internet protocols, the proprietary nature of existing hardware appliances, the costs, and the lack of skilled professionals for maintaining and upgrading them.

In order to alleviate these problems, two new network paradigms, Software Defined Networks (SDN) [1–5] and Network Functions Virtualization (NFV) [6, 7], have been recently proposed, with the specific target of improving the flexibility of network service provisioning and reducing the time to market of new services.

SDN is an emerging architecture that aims at making the network dynamic, manageable, and cost-effective, by decoupling the system that makes decisions about where traffic is sent (the control plane) from the underlying system that forwards traffic to the selected destination (the data plane). In this way, the network control becomes directly programmable, and the underlying infrastructure is abstracted for applications and network services.

NFV is a core structural change in the way telecommunication infrastructure is deployed. The NFV initiative started in late 2012 by some of the biggest telecommunications service providers, which formed an Industry Specification Group (ISG) within the European Telecommunications Standards Institute (ETSI). The interest has grown, involving today more than 28 network operators and over 150 technology providers from across the telecommunications industry [7]. The NFV paradigm leverages on virtualization technologies and commercial off-the-shelf programmable hardware, such as general-purpose servers, storage, and switches, with the final target of decoupling the software implementation of network functions from the underlying hardware.

The coexistence and the interaction of both NFV and SDN paradigms is giving to the network operators the possibility of achieving greater agility and acceleration in new service deployments, with a consequent considerable reduction of both Capital Expenditure (CAPEX) and Operational Expenditure (OPEX) [8].

One of the main challenging problems in deploying an SDN/NFV network is an efficient design of resource allocation and management functions that are in charge of the network Orchestrator. Although this task is well covered in data center and cloud scenarios [9, 10], it is currently a challenging problem in a geographic network, where transmission delays cannot be neglected, and the transmission capacities of the interconnection links are not comparable with the case of the above scenarios. This is the reason why the problem of orchestrating an SDN/NFV network is still open and attracts a lot of research interest from both academia and industry.


More specifically, the whole problem of orchestrating an SDN/NFV network is very complex, because it involves a design work at both internode and intranode levels [11, 12]. At the internode level, and in all the cases where the time between each execution is significantly greater than the time to collect, compute, and disseminate results, the application of a centralized approach is practicable [13, 14]. In these cases, in fact, taking into consideration both traffic characterization and the required level of quality of service (QoS), the Orchestrator is able to decide in a centralized manner how many instances of the same function have to be run simultaneously, the network nodes that have to execute them, and the routing strategies that allow traffic flows to cross the nodes where the requested network functions are running.

Instead, centralizing a strategy dealing with operations that require dynamic reconfiguration of network resources is actually unfeasible to be executed by the Orchestrator, for problems of resilience and scalability. Therefore, adaptive management operations with short timescales require a distributed approach. The authors of [15] propose a framework to support adaptive resource management operations, which involve short timescale reconfiguration of network resources, showing how requirements in terms of load-balancing and energy management [16–18] can be satisfied. Another work in the same direction is [19], which proposes a solution for the consolidation of VMs on local computing resources, exclusively based on local information. The work in [20] aims at alleviating the inter-VM network latency, defining a hypervisor scheduler algorithm that is able to take into consideration the allocation of the resources within a consolidated environment, scheduling VMs to reduce their waiting latency in the run queue. Another approach is applied in [21], which introduces a policy to manage the internal on/off switching of virtual network functions (VNFs) in NFV-compliant Customer Premises Equipment (CPE) devices.

The focus of this paper is on another fundamental problem that is inherent to resource allocation within an SDN/NFV node, that is, the decision of the percentage of CPU to be assigned to each VNF. If at a first glance this could look like a classical problem of processor sharing, which has been widely explored in the past literature [15, 22, 23], actually it is much more complex, because performance can be strongly improved by leveraging on the correlation with the output queues associated with the network interface cards (NICs).

With all this in mind, the main contribution of this paper is the definition of a processor-sharing policy, in the following referred to as Network-Aware Round Robin (NARR), which is specific for SDN/NFV nodes. Starting from the consideration that packets that have received the service of a network function from a virtual machine (VM) running on a given node are enqueued to wait for transmission through a given NIC, the proposed strategy dynamically changes the slices of the CPU assigned to each VNF according to the state of the output NIC queues. More specifically, the NARR strategy gives a larger CPU slice to serve packets that will leave the node through the NIC that is currently less loaded, in such a way as to minimize wastes of the NIC output link capacities, also minimizing the overall delay experienced by packets traversing nodes that implement NARR.
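To fix the intuition before the formal definition given in Section 3, one can picture the CPU slices being recomputed from the NIC queue states, for example as in the toy Python sketch below. This is only an assumed, simplified weighting chosen here for illustration (share proportional to the spare room of the destination NIC queue); the actual NARR weighting is the one specified in Section 3, and the names used are placeholders:

```python
import numpy as np

def toy_cpu_slices(nic_queue_len, K_nic, mu_p):
    """Illustrative only (not NARR's actual rule): split the processor
    rate mu_p so that queues headed to emptier NICs get larger slices.

    nic_queue_len[l]: occupancy of NIC l's queue; K_nic: queue size.
    Assumes at least one NIC queue is not full."""
    spare = K_nic - np.asarray(nic_queue_len, dtype=float)  # free slots
    return mu_p * spare / spare.sum()   # larger slice to less loaded NICs
```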

As a side contribution, the paper calculates an on-off model for the traffic leaving the SDN/NFV node on each NIC output link. This model can be used as a building block for the design and performance evaluation of an entire network.

The paper is structured as follows. Section 2 describes the node architecture. Section 3 introduces the NARR strategy. Section 4 presents a case study and shows some numerical results. Finally, Section 5 draws some conclusions and discusses some future work.

2. System Description

The target of this section is the description of the system weconsider in the rest of the paper It is an SDNNFV node asthe one considered in [11 12] where we will apply the NARRstrategy Its architecture shown in Figure 1(a) is compliantwith the ETSI Specifications [24] It is composed of three dif-ferent domains namely theCompute domain theHypervisordomain and the Infrastructure Network domain The Com-pute domain provides the computational and storage hard-ware resources that allow the node to host the VNFs Thanksto the computing and storage virtualization provided by theHypervisor domain a VM can be created migrated fromone node to another one and halted in order to optimize thedeployment according to specific performance parametersCommunications among theVMs and between theVMs andthe external environment are provided by the InfrastructureNetwork domain

The SDNNFVnode is remotely controlled by theOrches-trator whose architecture is shown in Figure 1(b) It is con-stituted by three main blocks The Orchestration Engine exe-cutes all the algorithms and the strategies to manage andorchestrate the whole network After each decision theOrchestration Engine requests that the NFV Coordinator in-stantiates migrates or halts VMs and consequently requeststhat the SDN Controller modifies the flow tables of the SDNswitches in the network in such a way that traffic flows cantraverse VMs hosting the requested VNFs

With this in mind, a functional architecture of the NFV node is represented in Figure 2. Its main components are the Processor, which manages the Compute domain and hosts the Hypervisor domain, and the Network Interface Cards (NICs) with their queues, which constitute the "Network Hardware" block in Figure 1.

Let $M$ be the number of virtual network functions (VNFs) that are running in the node, and let $L$ be the number of output NICs. In order to simplify notation, in the following we will assume that all the NICs have the same characteristics in terms of buffer capacity and output rate. So, let $K^{(\mathrm{NIC})}$ be the size of the queue associated with each NIC, that is, the maximum number of packets that each queue can contain, and let $\mu^{(\mathrm{NIC})}$ be the transmission rate of the output link associated with each NIC, expressed in bit/s.

The Flow Distributor block has the task of routing each entering flow towards the function required by the flow. It is a software SDN switch that can be implemented, for example, with Open vSwitch [25]. It routes the flows to the VMs running the requested VNFs according to the control messages received by the SDN Controller residing in the Orchestrator.

Figure 1: Network function virtualization infrastructure. (a) NFV node; (b) Orchestrator.

Figure 2: NFV node functional architecture.

The most common protocol that can be used for the communications between the SDN Controller and the Orchestrator is OpenFlow [26].

Let $\mu^{(P)}$ be the total processing rate of the processor, expressed in packets/s. This rate is shared among all the active functions according to a processor rate scheduling strategy. Let $\mu^{(F)}$ be the array whose generic element $\mu^{(F)}_{[m]}$, with $m \in \{1, \ldots, M\}$, is the portion of the processor rate assigned to the VM implementing the function $F_m$. Of course, we have

$$\sum_{m=1}^{M} \mu^{(F)}_{[m]} = \mu^{(P)}. \quad (1)$$

Once a packet has been served by the required function, it is sent to one of the NICs to exit from the node. If the NIC is transmitting another packet, the arriving packets are enqueued in the NIC queue. We will indicate the queue associated with the generic NIC $l$ as $Q^{(\mathrm{NIC})}_{l}$.

In order to implement the NARR strategy proposed in this paper, we realize the block relative to each function with a set of $L$ parallel queues, in the following referred to as inside-function queues, as shown in Figure 3 for the generic function $F_m$. The generic $l$th inside-function queue of the function $F_m$, indicated as $Q_{m,l}$ in Figure 3, is used to enqueue packets that, after receiving the service of the function $F_m$, will leave the node through the NIC $l$. Let $K^{(F)}_{\mathrm{Ins}}$ be the size of each inside-function queue. Each inside-function queue of the generic function $F_m$ receives a portion of the processor rate assigned to that function. Let $\mu^{(F_{\mathrm{Ins}})}_{[m,l]}$ be the portion of the processor rate assigned to the queue $Q_{m,l}$ of the function $F_m$. Of course, we have

$$\sum_{l=1}^{L} \mu^{(F_{\mathrm{Ins}})}_{[m,l]} = \mu^{(F)}_{[m]}. \quad (2)$$
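To make this queueing structure concrete, the following minimal Python sketch (our illustration, not code from the simulator in [27]) organizes the $M \times L$ inside-function queues of Figure 3; the sizes match the case study of Section 4, and all names are ours.

```python
from collections import deque

# Hypothetical sketch of the inside-function queue structure of Figure 3:
# inside_queues[m][l] plays the role of Q_{m,l}, holding packets that have
# been served by function F_m and will leave the node through NIC l.

M, L = 4, 3          # number of VNFs and of output NICs (Section 4 values)
K_F_INS = 3000       # size of each inside-function queue, in packets

inside_queues = [[deque() for _ in range(L)] for _ in range(M)]

def enqueue_after_service(packet, m, l):
    """Store a packet served by F_m and directed to NIC l; tail-drop if full."""
    q = inside_queues[m][l]
    if len(q) < K_F_INS:
        q.append(packet)
        return True
    return False
```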


Figure 3: Block diagram of the generic function $F_m$.

The portion of processor rate associated with each inside-function queue is dynamically changed by the Processor Arbiter according to the NARR strategy described in Section 3.

3. NARR Processor-Sharing Strategy

The NARR (Network-Aware Round Robin) processor-sharing strategy observes the state of both the inside-function queues and the NIC queues, with the goal of reducing as far as possible the inactivity periods of the NIC output links. As already introduced in the Introduction, its definition starts from the fact that packets that have received the service of a VNF are enqueued to wait for transmission through a given NIC. So, in order to avoid output link capacity waste, NARR dynamically changes the slices of the CPU assigned to each VNF, and in particular to each inside-function queue, according to the state of the output NIC queues, assigning larger CPU slices to serve packets that will leave the node through less-loaded NICs.

More specifically, the Processor Arbiter decides the processor rate portions according to two different steps.

Step 1 (assignment of the processor rate portion to the aggregation of queues whose output is a specific NIC). This step meets the target of the proposed strategy, that is, to reduce as much as possible underutilization of the NIC output links and, as a consequence, delays in the relative queues. To this purpose, let us consider a virtual queue that contains all the packets that are stored in all the inside-function queues $Q_{m,l}$, for each $m \in [1, M]$, that is, all the packets that will leave the node through the NIC $l$. Let us indicate this virtual queue as $Q^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$ and its service rate as $\mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$. Of course, we have

$$\mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}} = \sum_{m=1}^{M} \mu^{(F_{\mathrm{Ins}})}_{[m,l]}. \quad (3)$$

With this in mind, the idea is to give a higher processor slice to the inside-function queues whose flows are directed to the NICs that are emptying.

Taking into account the goal of privileging the flows that will leave the node through underloaded NICs, the Processor Arbiter calculates $\mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$ as follows:

$$\mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}} = \frac{q_{\mathrm{ref}} - S^{(\mathrm{NIC})}_{l}}{\sum_{j=1}^{L} \left( q_{\mathrm{ref}} - S^{(\mathrm{NIC})}_{j} \right)} \, \mu^{(P)}, \quad (4)$$

where $S^{(\mathrm{NIC})}_{l}$ represents the state of the queue associated with the NIC $l$, while $q_{\mathrm{ref}}$ is defined as follows:

$$q_{\mathrm{ref}} = \min \left\{ \alpha \cdot \max_{j} \left( S^{(\mathrm{NIC})}_{j} \right),\; K^{(\mathrm{NIC})} \right\}. \quad (5)$$

The term $q_{\mathrm{ref}}$ is a reference target value calculated from the state of the NIC queue that has the highest length, amplified with a coefficient $\alpha$ and truncated to the maximum queue size $K^{(\mathrm{NIC})}$. It is determined in such a way that, if we consider $\alpha = 1$, the NIC queue that has the highest length does not receive packets from the inside-function queues, because their service rate is set to zero; the other queues receive packets with a rate that is proportional to the distance between their length and the length of the most overloaded NIC queue. However, through an extensive set of simulations, we deduced that setting $\alpha = 1$ causes bad performance, because there is always a group of inside-function queues that are not served. Instead, all the $\alpha$ values in the interval $(1, 2]$ give almost equivalent performance. For this reason, in the numerical analysis presented in Section 4, we have set $\alpha = 1.2$.

Step 2 (assignment of the processor rate portion to each inside-function queue). Let us consider the generic $l$th inside-function queue of the function $F_m$, that is, the queue $Q_{m,l}$. Its service rate is calculated as being proportional to the current state of this queue in comparison with the other $l$th queues of the other functions. To this purpose, let us indicate the state of the virtual queue $Q^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$ as $S^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$. Of course, it can be calculated as the sum of the states of all the inside-function queues $Q_{m,l}$, for each $m \in [1, M]$, that is,

$$S^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}} = \sum_{k=1}^{M} S^{(F_{\mathrm{Ins}})}_{k,l}. \quad (6)$$

So, the service rate of the inside-function queue $Q_{m,l}$ is determined as a fraction of the service rate assigned at the first step to the virtual queue $Q^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$, that is, $\mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}$, as follows:

$$\mu^{(F_{\mathrm{Ins}})}_{[m,l]} = \frac{S^{(F_{\mathrm{Ins}})}_{m,l}}{S^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}} \, \mu^{(\rightarrow \mathrm{NIC}_l)}_{\mathrm{Aggr}}. \quad (7)$$

Of course, if at any time an inside-function queue remains empty, the processor rate portion assigned to it will be shared among the other queues proportionally to the processor portions previously assigned. Likewise, if at some instant an empty queue receives a new packet, the previous processor rate portion is reassigned to that queue.
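The two steps are summarized by the following Python sketch of the Processor Arbiter logic, a direct transcription of equations (3)-(7) under our own naming; the equal-split handling of all-empty queue sets is a simplification of the reassignment rule described above, and $\alpha = 1.2$ is the value used in Section 4.

```python
import numpy as np

def narr_rates(S_nic, S_ins, mu_P, K_nic, alpha=1.2):
    """
    S_nic : length-L array of NIC queue states S^(NIC)_l
    S_ins : M x L array of inside-function queue states S^(F_Ins)_{m,l}
    Returns an M x L array of processor rate portions mu^(F_Ins)_{[m,l]}.
    """
    S_nic = np.asarray(S_nic, dtype=float)
    S_ins = np.asarray(S_ins, dtype=float)

    # Step 1: share mu_P among the L per-NIC aggregates (eqs. (4)-(5)).
    q_ref = min(alpha * S_nic.max(), K_nic)
    weights = np.maximum(q_ref - S_nic, 0.0)   # favor less-loaded NICs
    if weights.sum() == 0:                     # all NIC queues empty
        weights = np.ones_like(S_nic)          # degenerate case: equal split
    mu_aggr = weights / weights.sum() * mu_P   # mu^(->NIC_l)_Aggr

    # Step 2: split each aggregate among its inside-function queues (eq. (7)),
    # proportionally to their lengths; an all-empty column shares equally.
    rates = np.zeros_like(S_ins)
    for l in range(S_ins.shape[1]):
        col = S_ins[:, l]
        if col.sum() > 0:
            rates[:, l] = col / col.sum() * mu_aggr[l]
        else:
            rates[:, l] = mu_aggr[l] / S_ins.shape[0]
    return rates
```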


4. Case Study

In this section we present a numerical analysis of the behavior of an SDN/NFV node that applies the proposed NARR processor-sharing strategy, with the target of evaluating the achieved performance. To this purpose, we will consider two other processor-sharing strategies as reference, in the following referred to as round robin (RR) and queue-length weighted round robin (QLWRR). In both the reference cases, the node has the same $L$ NIC queues, but it has only $M$ processor queues, one for each function. Each of the $M$ queues has a size of $K^{(F)} = L \cdot K^{(F)}_{\mathrm{Ins}}$, where $K^{(F)}_{\mathrm{Ins}}$ represents the size of each inside-function queue, already defined so far for the proposed strategy.

The RR strategy applies the classical round robin scheduling policy to serve the $M$ function queues, that is, it serves each function queue with a rate $\mu^{(F)}_{\mathrm{RR}} = \mu^{(P)}/M$.

The QLWRR strategy, on the other hand, serves each function queue with a rate that is proportional to the queue length, that is,

$$\mu^{(F_m)}_{\mathrm{QLWRR}} = \frac{S^{(F_m)}}{\sum_{k=1}^{M} S^{(F_k)}} \, \mu^{(P)}, \quad (8)$$

where $S^{(F_m)}$ is the state of the queue associated with the function $F_m$.
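For comparison, the two reference schedulers admit an equally compact sketch, again with our naming; rr_rates implements the uniform split and qlwrr_rates implements equation (8), with an equal-split fallback (our assumption) when all queues are empty.

```python
import numpy as np

def rr_rates(M, mu_P):
    """Round robin: each of the M function queues gets mu^(P)/M."""
    return np.full(M, mu_P / M)

def qlwrr_rates(S_F, mu_P):
    """Queue-length weighted round robin, eq. (8):
    mu^(F_m) = S^(F_m) / sum_k S^(F_k) * mu^(P)."""
    S_F = np.asarray(S_F, dtype=float)
    if S_F.sum() == 0:
        return np.full(S_F.size, mu_P / S_F.size)  # all empty: same as RR
    return S_F / S_F.sum() * mu_P
```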

4.1. Parameter Settings. In this numerical analysis we consider a node with $M = 4$ VNFs and $L = 3$ output NICs. We loaded the node with a balanced traffic constituted by $N_F = 12$ flows, each characterized by a different 2-uple (requested VNF, output NIC). Each flow has been generated with an on-off model characterized by exponentially distributed on and off periods. When a flow is in the off state, no packets arrive to the node from it; instead, when it is in the on state, packets arrive with an average rate $\lambda_{\mathrm{ON}}$. Each packet is assumed with an exponentially distributed size with a mean of 1 kbyte. In all the simulations we have considered the same on-off cycle duration $\delta = T_{\mathrm{OFF}} + T_{\mathrm{ON}} = 5$ msec, while we have varied the burstiness. Burstiness of on-off sources is defined as

$$b = \frac{\lambda_{\mathrm{ON}}}{\lambda_{\mathrm{Mean}}}, \quad (9)$$

where $\lambda_{\mathrm{Mean}}$ is the mean emission rate. Now, indicating the probability of the ON state as $\pi_{\mathrm{ON}}$, and taking into account that $\lambda_{\mathrm{Mean}} = \lambda_{\mathrm{ON}} \cdot \pi_{\mathrm{ON}}$ and $\pi_{\mathrm{ON}} = T_{\mathrm{ON}}/(T_{\mathrm{OFF}} + T_{\mathrm{ON}})$, we have

$$b = \frac{T_{\mathrm{OFF}} + T_{\mathrm{ON}}}{T_{\mathrm{ON}}}. \quad (10)$$

In our analysis the burstiness has been varied in the range $[2, 22]$. Consequently, we have derived the mean durations of the off and on periods as follows:

$$T_{\mathrm{ON}} = \frac{\delta}{b}, \qquad T_{\mathrm{OFF}} = \delta - T_{\mathrm{ON}}. \quad (11)$$

Finally, in order to maintain the same mean packet rate for different values of $T_{\mathrm{OFF}}$ and $T_{\mathrm{ON}}$, we have assumed that 75 packets are transmitted on average in each on period, that is, $\lambda_{\mathrm{ON}} = 75/T_{\mathrm{ON}}$. The resulting mean emission rate is $\lambda_{\mathrm{Mean}} = 122.9$ Mbit/s.
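As a worked example of equations (9)-(11), the following snippet derives the on-off parameters for some burstiness values; the conversion of the 1-kbyte mean packet size into 8192 bits is our assumption for expressing the rate in Mbit/s.

```python
delta = 5e-3                # on-off cycle duration T_OFF + T_ON, in seconds
packets_per_cycle = 75      # mean packets emitted per on-period

for b in (2, 7, 22):        # burstiness values within the explored range
    T_on = delta / b                          # eq. (11)
    T_off = delta - T_on
    lam_on = packets_per_cycle / T_on         # peak rate, packets/s
    lam_mean = lam_on * (T_on / delta)        # = 75/delta, independent of b
    print(f"b={b:2d}: T_on={T_on*1e3:.3f} ms, T_off={T_off*1e3:.3f} ms, "
          f"lam_on={lam_on/1e3:5.1f} kpkt/s, "
          f"lam_mean={lam_mean*8192/1e6:.1f} Mbit/s")
# Every row gives lam_mean ~ 122.9 Mbit/s, matching the value in the text.
```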

As far as the NICs are concerned, we have considered an output rate of $\mu^{(\mathrm{NIC})} = 980$ Mbit/s, so having a utilization coefficient on each NIC of $\rho^{(\mathrm{NIC})} = 0.5$, and a queue size $K^{(\mathrm{NIC})} = 3000$ packets. Instead, regarding the processor, we considered a size of each inside-function queue of $K^{(F)}_{\mathrm{Ins}} = 3000$ packets. Finally, we have analyzed two different processor cases. In the first case we considered a processor that is able to process $\mu^{(P)} = 306$ kpackets/s, while in the second case we assumed a processor rate of $\mu^{(P)} = 204$ kpackets/s. Therefore, defining the processor utilization coefficient as

$$\rho^{(P)} = \frac{N_F \cdot \lambda_{\mathrm{Mean}}}{\mu^{(P)}}, \quad (12)$$

with $N_F \cdot \lambda_{\mathrm{Mean}}$ being the total mean arrival rate to the node, the two considered cases are characterized by a processor utilization coefficient of $\rho^{(P)}_{\mathrm{Low}} = 0.6$ and $\rho^{(P)}_{\mathrm{High}} = 0.9$, respectively.
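Under the same assumption on the packet size, the utilization coefficients stated above can be checked directly against equation (12); the values 0.6 and 0.9 used in the text are the rounded results.

```python
N_F = 12                       # number of flows
lam_mean_pkts = 75 / 5e-3      # mean per-flow rate: 15 kpackets/s
offered_pkts = N_F * lam_mean_pkts        # 180 kpackets/s to the processor

for mu_P in (306e3, 204e3):               # the two processor cases
    print(f"mu_P = {mu_P/1e3:.0f} kpkt/s -> rho_P = {offered_pkts/mu_P:.2f}")
# prints rho_P = 0.59 and 0.88, i.e., the rounded 0.6 and 0.9 of the text

mu_NIC = 980e6                 # NIC output rate, bit/s
rho_NIC = offered_pkts * 8192 / (3 * mu_NIC)   # balanced over L = 3 NICs
print(f"rho_NIC = {rho_NIC:.2f}")              # ~0.50, as stated
```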

4.2. Numerical Results. In this section we present some results achieved by discrete-event simulations. The simulation tool used in the paper is publicly available in [27]. We first present a temporal analysis of the main variables characterizing the SDN/NFV node, and then we show a performance comparison of NARR with the two strategies RR and QLWRR, taken as a reference.

For the temporal analysis we focus on a short time interval of 720 $\mu$s, in order to be able to clearly highlight the evolution of the considered processes. We have loaded the node with an on-off traffic like the one described in Section 4.1, with a burstiness $b = 7$.

Figures 4, 5, and 6 show the time evolution of the length of the queues associated with the NICs, the processor slice assigned to each virtual queue loading each NIC, and the length of the same virtual queues. We can subdivide the considered time interval into three different periods.

In the first period, ranging between the instants 0.1063 and 0.1067, from Figure 4 we can notice that the NIC queue $Q^{(\mathrm{NIC})}_{1}$ has a greater length than the queue $Q^{(\mathrm{NIC})}_{3}$, while $Q^{(\mathrm{NIC})}_{2}$ is empty. For this reason, in this period, as shown in Figure 5, the processor is shared between $Q^{(\rightarrow \mathrm{NIC}_1)}_{\mathrm{Aggr}}$ and $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$, and the slice assigned to serve $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ is higher, in such a way that the two queue lengths $Q^{(\mathrm{NIC})}_{1}$ and $Q^{(\mathrm{NIC})}_{3}$ reach the same value, a situation that happens at the end of the first period, around the instant 0.1067. During this period, the behavior of both the virtual queues $Q^{(\rightarrow \mathrm{NIC}_1)}_{\mathrm{Aggr}}$ and $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ in Figure 6 remains flat, showing the fact that the received processor rates are able to balance the arrival rates.

At the beginning of the second period, which ranges between the instants 0.1067 and 0.10693, the processor rate assigned to the virtual queue $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ has become not sufficient to serve the amount of arriving traffic, and so the virtual queue $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ increases its length, as shown in Figure 6.

Figure 4: Length evolution of the NIC queues.

Figure 5: Packet rate assigned to the virtual queues.

Figure 6: Length evolution of the virtual queues.

During this second period, the processor slices assigned to the two queues $Q^{(\rightarrow \mathrm{NIC}_1)}_{\mathrm{Aggr}}$ and $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ are adjusted in such a way that $Q^{(\mathrm{NIC})}_{1}$ and $Q^{(\mathrm{NIC})}_{3}$ remain with comparable lengths.

The last period starts at the instant 0.10693, characterized by the fact that the aggregated queue $Q^{(\rightarrow \mathrm{NIC}_2)}_{\mathrm{Aggr}}$ leaves the empty state and therefore participates in the processor-sharing process. Since the NIC queue $Q^{(\mathrm{NIC})}_{2}$ is low-loaded, as shown in Figure 4, the largest slice is assigned to $Q^{(\rightarrow \mathrm{NIC}_2)}_{\mathrm{Aggr}}$, in such a way that $Q^{(\mathrm{NIC})}_{2}$ can reach the same length of the other two NIC queues as soon as possible.

Now, in order to show how the second step of the proposed strategy works, we present the behavior of the inside-function queues whose output is sent to the NIC queue $Q^{(\mathrm{NIC})}_{3}$ during the same short time interval considered so far. More specifically, Figures 7 and 8 show the length of the considered inside-function queues and the processor slices assigned to them, respectively. The behavior of the queue $Q_{2,3}$ is not shown because it is empty in the considered period. As we can observe from the above figures, we can subdivide the interval into four periods:

(i) The first period, ranging in the interval [0.1063, 0.10657], is characterized by an empty state of the queue $Q_{3,3}$. Thus, in this period, the processor slice assigned to the aggregated queue $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ is shared by $Q_{1,3}$ and $Q_{4,3}$ only.

(ii) During the second period, ranging in the interval [0.10657, 0.1067], $Q_{1,3}$ is scarcely loaded (in particular, it is empty in the second part of this period), and so the processor slice assigned to $Q_{3,3}$ is increased.

(iii) In the third period, ranging between 0.1067 and 0.10693, all the queues increase and equally share the processor.

(iv) Finally, in the last period, starting at the instant 0.10693, as already observed in Figure 5, the processor slice assigned to the aggregated queue $Q^{(\rightarrow \mathrm{NIC}_3)}_{\mathrm{Aggr}}$ is suddenly decreased, and consequently the slices assigned to the queues $Q_{1,3}$, $Q_{3,3}$, and $Q_{4,3}$ are decreased as well.

The steady-state analysis is presented in Figures 9, 10, and 11, which show the mean delay in the inside-function queues, the mean delay in the NIC queues, and the durations of the off- and on-states on the output links. The values reported in all the figures have been derived as the mean values of the results of many simulation experiments, using Student's t-distribution and with a 95% confidence interval. The number of experiments carried out to evaluate each numerical result has been automatically decided by the simulation tool, with the requirement of achieving a confidence interval less than 0.001 of the estimated mean value. To this purpose, the confidence interval is calculated at the end of each run, and simulation is stopped only when the confidence interval matches the maximum error requirement. The duration of each run has been chosen in such a way that the sample standard deviation is so low that less than 30 runs are enough to match the requirement.
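The stopping rule just described can be sketched as follows; run_once() is a placeholder for a complete simulation run, and the function and parameter names are ours, not those of the tool in [27].

```python
import numpy as np
from scipy import stats

def estimate(run_once, rel_err=1e-3, min_runs=5, max_runs=1000):
    """Add independent runs until the half-width of the 95% Student-t
    confidence interval falls below rel_err times the estimated mean."""
    samples = [run_once() for _ in range(min_runs)]
    while True:
        n = len(samples)
        mean = float(np.mean(samples))
        half = stats.t.ppf(0.975, n - 1) * np.std(samples, ddof=1) / np.sqrt(n)
        if half <= rel_err * abs(mean) or n >= max_runs:
            return mean, half, n
        samples.append(run_once())
```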


Figure 7: Length evolution of the inside-function queues $Q_{m,3}$, for each $m \in [1, M]$. (a) Queue $Q_{1,3}$; (b) queue $Q_{3,3}$; (c) queue $Q_{4,3}$.

The figures compare the results achieved with the proposed strategy with the ones obtained with the two strategies RR and QLWRR. As said so far, results have been obtained against the burstiness and for two different values of the utilization coefficient, that is, $\rho^{(P)}_{\mathrm{Low}} = 0.6$ and $\rho^{(P)}_{\mathrm{High}} = 0.9$.

As expected, the mean lengths of the processor queues and the NIC queues increase with both the burstiness and the utilization coefficient.

Instead, we can note that the mean length of the processor queues is not affected by the applied policy. In fact, packets requiring the same function are enqueued in a unique queue and served with a rate $\mu^{(P)}/4$ when the RR or QLWRR strategies are applied, while, when the NARR strategy is used, they are split into 12 different queues and served with a rate $\mu^{(P)}/12$. If in classical queueing theory [28] the second case is worse than the first one, because of the presence of service rate wastes during the more probable periods of empty queues, this is not the case here, because the processor capacity of an empty queue is dynamically reassigned to the other queues.

Figure 9: Mean delay in the function internal queues.

The advantage of the NARR strategy is evident in Figure 10, where the mean delay in the NIC queues is represented. In fact, we can observe that, by only giving more processor rate to the most loaded processor queues (with the QLWRR strategy), performance improvements are negligible, while, by applying the NARR strategy, we are able to obtain a delay reduction of about 12% in the case of a more performant processor ($\rho^{(P)}_{\mathrm{Low}} = 0.6$), reaching 50% when the processor works with $\rho^{(P)}_{\mathrm{High}} = 0.9$. The performance gain achieved with the NARR strategy increases with burstiness and with the processor load conditions, which are both likely. In fact, the first condition is due to the high burstiness of the Internet traffic; the second one is true as well, because the processor should not be overdimensioned for economic purposes; otherwise, if overdimensioned, it can be controlled with a processor rate management policy like the one presented in [29], in order to save energy.

Figure 10: Mean delay in the NIC queues.

Finally, Figure 11 shows the mean durations of the on- and off-periods on the node output links. Only one curve is shown for each case, because we have considered a utilization coefficient $\rho^{(\mathrm{NIC})} = 0.5$ on each NIC queue, and therefore in the considered case we have $T_{\mathrm{ON}} = T_{\mathrm{OFF}}$.

Figure 11: Mean off- and on-state durations of the traffic on the output links.

Figure 8: Processor rate assigned to the inside-function queues $Q_{m,3}$, for each $m \in [1, M]$. (a) Processor rate $\mu^{(F_{\mathrm{Ins}})}_{[1,3]}$; (b) processor rate $\mu^{(F_{\mathrm{Ins}})}_{[3,3]}$; (c) processor rate $\mu^{(F_{\mathrm{Ins}})}_{[4,3]}$.

As expected, the mean on-off durations are higher when the processor rate is higher (i.e., lower utilization coefficient). This is because in this case the processor delivers the batches of packets of each burst to the NIC queues more quickly, so making both the busy (on) and the idle (off) periods of the output links longer. These results can be used to model the output traffic of each SDN/NFV node as an Interrupted Poisson Process (IPP) [30]. This model can be iteratively used to represent the input traffic of other nodes, with the final target of realizing the model of a whole SDN/NFV network.
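As an illustration of how the measured on-off durations could feed such a network model, the following sketch generates the arrival instants of an IPP from a given pair of mean on/off durations and a peak rate; it is a generic on-off generator written for this purpose, not code from [27] or [30].

```python
import random

def ipp_arrivals(t_on, t_off, rate, horizon):
    """Yield packet arrival times of an IPP with exponentially distributed
    on/off periods: Poisson arrivals at `rate` during on, silence during off."""
    t, arrivals = 0.0, []
    while t < horizon:
        end_on = t + random.expovariate(1.0 / t_on)   # on-period duration
        while True:
            t += random.expovariate(rate)             # next Poisson arrival
            if t >= end_on:
                break
            arrivals.append(t)
        t = end_on + random.expovariate(1.0 / t_off)  # silent off-period
    return arrivals
```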

5. Conclusions and Future Work

This paper addresses the problem of intranode resource allocation by introducing NARR, a processor-sharing strategy that leverages the consideration that, in any SDN/NFV node, packets that have received the service of a VNF are enqueued to wait for transmission through one of the output NICs. Therefore, the idea at the base of NARR is to dynamically change the slices of the CPU assigned to each VNF according to the state of the output NIC queues, giving more CPU to serve packets that will leave the node through the less-loaded NICs. In this way, wastes of the NIC output link capacities are minimized, and consequently the overall delay experienced by packets traversing the nodes that implement NARR is reduced.

SDN/NFV nodes that implement NARR can coexist in the same network with nodes that use other strategies, so facilitating a gradual introduction in the Future Internet.

As a side contribution, the simulator tool, which is publicly available on the web, gives the on-off model of the output links associated with each NIC as one of the results. This model can be used as a building block to realize a model for the design and performance evaluation of a whole SDN/NFV network.


Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work has been partially supported by the INPUT (In-Network Programmability for Next-Generation Personal cloUd Service Support) project, funded by the European Commission under the Horizon 2020 Programme (Call H2020-ICT-2014-1, Grant no. 644672).

References

[1] White paper on "Software-Defined Networking: The New Norm for Networks," https://www.opennetworking.org.

[2] M. Yu, L. Jose, and R. Miao, "Software defined traffic measurement with OpenSketch," in Proceedings of the Symposium on Network Systems Design and Implementation (NSDI '13), vol. 13, pp. 29–42, Lombard, Ill, USA, April 2013.

[3] H. Kim and N. Feamster, "Improving network management with software defined networking," IEEE Communications Magazine, vol. 51, no. 2, pp. 114–119, 2013.

[4] S. H. Yeganeh, A. Tootoonchian, and Y. Ganjali, "On scalability of software-defined networking," IEEE Communications Magazine, vol. 51, no. 2, pp. 136–141, 2013.

[5] B. A. A. Nunes, M. Mendonca, X.-N. Nguyen, K. Obraczka, and T. Turletti, "A survey of software-defined networking: past, present, and future of programmable networks," IEEE Communications Surveys and Tutorials, vol. 16, no. 3, pp. 1617–1634, 2014.

[6] White paper on "Network Functions Virtualisation," http://portal.etsi.org/NFV/NFV_White_Paper.pdf.

[7] Network Functions Virtualisation (NFV): Network Operator Perspectives on Industry Progress, ETSI, October 2013, http://portal.etsi.org/NFV/NFV_White_Paper2.pdf.

[8] A. Manzalini, R. Saracco, E. Zerbini, et al., "Software-Defined Networks for Future Networks and Services," white paper based on the IEEE Workshop SDN4FNS, https://sites.ieee.org/sdn4fns/whitepaper.

[9] R. Z. Frantz, R. Corchuelo, and J. L. Arjona, "An efficient orchestration engine for the cloud," in Proceedings of the IEEE 3rd International Conference on Cloud Computing Technology and Science (CloudCom '11), vol. 2, pp. 711–716, Athens, Greece, December 2011.

[10] K. Bousselmi, Z. Brahmi, and M. M. Gammoudi, "Cloud services orchestration: a comparative study of existing approaches," in Proceedings of the 28th IEEE International Conference on Advanced Information Networking and Applications Workshops (WAINA '14), pp. 410–416, Victoria, Canada, May 2014.

[11] A. Lombardo, A. Manzalini, V. Riccobene, and G. Schembra, "An analytical tool for performance evaluation of software defined networking services," in Proceedings of the IEEE Network Operations and Management Symposium (NOMS '14), pp. 1–7, Krakow, Poland, May 2014.

[12] A. Lombardo, A. Manzalini, G. Schembra, G. Faraci, C. Rametta, and V. Riccobene, "An open framework to enable NetFATE (network functions at the edge)," in Proceedings of the IEEE Workshop on Management Issues in SDN, SDI and NFV (Mission '15), pp. 1–6, London, UK, April 2015.

[13] A. Manzalini, R. Minerva, F. Callegati, W. Cerroni, and A. Campi, "Clouds of virtual machines in edge networks," IEEE Communications Magazine, vol. 51, no. 7, pp. 63–70, 2013.

[14] H. Moens and F. De Turck, "VNF-P: a model for efficient placement of virtualized network functions," in Proceedings of the 10th International Conference on Network and Service Management (CNSM '14), pp. 418–423, Rio de Janeiro, Brazil, November 2014.

[15] K. Katsalis, G. S. Paschos, Y. Viniotis, and L. Tassiulas, "CPU provisioning algorithms for service differentiation in cloud-based environments," IEEE Transactions on Network and Service Management, vol. 12, no. 1, pp. 61–74, 2015.

[16] D. Tuncer, M. Charalambides, G. Pavlou, and N. Wang, "Towards decentralized and adaptive network resource management," in Proceedings of the 7th International Conference on Network and Service Management (CNSM '11), pp. 1–6, Paris, France, October 2011.

[17] D. Tuncer, M. Charalambides, G. Pavlou, and N. Wang, "DACoRM: a coordinated, decentralized and adaptive network resource management scheme," in Proceedings of the IEEE Network Operations and Management Symposium (NOMS '12), pp. 417–425, Maui, Hawaii, USA, April 2012.

[18] M. Charalambides, D. Tuncer, L. Mamatas, and G. Pavlou, "Energy-aware adaptive network resource management," in Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM '13), pp. 369–377, Ghent, Belgium, May 2013.

[19] C. Mastroianni, M. Meo, and G. Papuzzo, "Probabilistic consolidation of virtual machines in self-organizing cloud data centers," IEEE Transactions on Cloud Computing, vol. 1, no. 2, pp. 215–228, 2013.

[20] B. Guan, J. Wu, Y. Wang, and S. U. Khan, "CIVSched: a communication-aware inter-VM scheduling technique for decreased network latency between co-located VMs," IEEE Transactions on Cloud Computing, vol. 2, no. 3, pp. 320–332, 2014.

[21] G. Faraci and G. Schembra, "An analytical model to design and manage a green SDN/NFV CPE node," IEEE Transactions on Network and Service Management, vol. 12, no. 3, pp. 435–450, 2015.

[22] L. Kleinrock, "Time-shared systems: a theoretical treatment," Journal of the ACM, vol. 14, no. 2, pp. 242–261, 1967.

[23] S. F. Yashkov and A. S. Yashkova, "Processor sharing: a survey of the mathematical theory," Automation and Remote Control, vol. 68, no. 9, pp. 1662–1731, 2007.

[24] ETSI GS NFV-INF 001 V1.1.1, "Network Functions Virtualization (NFV): Infrastructure Overview," 2015, https://www.etsi.org/deliver/etsi_gs/NFV-INF/001_099/001/01.01.01_60/gs_NFV-INF001v010101p.pdf.

[25] T. N. Subedi, K. K. Nguyen, and M. Cheriet, "OpenFlow-based in-network Layer-2 adaptive multipath aggregation in data centers," Computer Communications, vol. 61, pp. 58–69, 2015.

[26] N. McKeown, T. Anderson, H. Balakrishnan, et al., "OpenFlow: enabling innovation in campus networks," ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, pp. 69–74, 2008.

[27] NARR simulator v0.1, http://www.diit.unict.it/arti/Tools/NARRSimulator_v01.zip.

[28] L. Kleinrock, Queueing Systems. Volume 1: Theory, Wiley-Interscience, 1st edition, 1975.

[29] R. Bruschi, P. Lago, A. Lombardo, and G. Schembra, "Modeling power management in networked devices," Computer Communications, vol. 50, pp. 95–109, 2014.

[30] W. Fischer and K. S. Meier-Hellstern, "The Markov-Modulated Poisson Process (MMPP) cookbook," Performance Evaluation, vol. 18, no. 2, pp. 149–171, 1993.
