
IoT Traffic Management and Integration in the QoS Supported Network

Basim K. J. Al-Shammari*, Student Member, IEEE, N. A. Al-Aboody, Student Member, IEEE, and H. S. Al-Raweshidy, Senior Member, IEEE

Department of Electronic and Computer Engineering, Brunel University London, UK.

[email protected]

Abstract—This paper proposes: (i) a traffic flow management policy, which allocates and organises the sharing of network resources among Machine Type Communication (MTC) traffic flows within the Evolved Packet System (EPS), (ii) an access element acting as a Wireless Sensor Network (WSN) gateway, providing an overlaying access channel between the Machine Type Devices (MTDs) and the EPS, and (iii) a treatment of the effect of, and interaction between, heterogeneous applications, services and terminal devices, and the related Quality of Service (QoS) issues among them. This work overcomes the problem of network resource starvation by preventing the deterioration of network performance. The scheme is validated through simulation, which indicates that the proposed traffic flow management policy outperforms the current traffic management policy. Specifically, simulation results show that the proposed model achieves an enhancement in QoS performance for the MTC traffic flows, including a decrease of 99.45% in Packet Loss Ratio (PLR), a decrease of 99.89% in packet End To End (E2E) delay and a decrease of 99.21% in Packet Delay Variation (PDV). Furthermore, it retains the perceived Quality of Experience (QoE) of real time application users at high satisfaction levels, with the VoLTE service possessing a Mean Opinion Score (MOS) of 4.349, and enhances the QoS of a video conference service within the values standardised by the 3GPP body, with a decrease of 85.28% in PLR, a decrease of 85% in packet E2E delay and a decrease of 88.5% in PDV.

Index Terms—IoT, MTC, HTC, Traffic Policy, Gateway, QCI, NQoS, AQoS, E2E delay, Jitter.

I. INTRODUCTION AND RELATED WORK

MTC (or Machine to Machine (M2M) communication) introduces the concept of enabling networked electronic devices (wireless and/or wired), dubbed MTC devices, to communicate with a central MTC server or a set of MTC servers, exchanging data or control without human interference. This includes the mechanisms, algorithms and technologies used to enable MTC, which involves connecting a large number of devices or pieces of software and leads to the creation of the Internet of Things (IoT). MTC developments are driven by the unlimited potential of everyday IoT applications, as the connected objects grow more intelligent. These applications take the form of smart grids, healthcare systems, smart cities, building automation and many others. Such systems can be implemented through the use of Wireless Sensor Networks (WSNs), within which the utilisation of cooperating devices can achieve intelligent monitoring and management.

List of Abbreviations

3G, 4G, 5G  3rd, 4th, 5th Generation
AC  Admission Control
ARP  Allocation and Retention Priority
AQoS  Application QoS
BS  Base Station
CoAP  Constrained Application Protocol
DL  Downlink
DSCP  Differentiated Services Code Point
E2E  End To End
eNB  evolved NodeB
EPS  Evolved Packet System
UE  User Equipment
FTP  File Transfer Protocol
GBR  Guaranteed Bit Rate
H2H  Human-to-Human
HTC  Human Type Communication
HTD  Human Type Device
IoT  Internet of Things
IP  Internet Protocol
KPI  Key Performance Indicator
KQI  Key Quality Indicator
LPWAN  Low Power WAN
LTE  Long Term Evolution
LTE-A  LTE Advanced
LTE-APro  LTE-A Proximity
M2M  Machine to Machine
MAC  Medium Access Control
MBR  Maximum Bit Rate
MC  Mission Critical
MCN  Mobile Cellular Network
MOS  Mean Opinion Score
MTC  Machine Type Communication
MTD  Machine Type Device
NAS  Non Access Stratum
NB-IoT  Narrow Band IoT
NGBR  Non-Guaranteed Bit Rate
NGN  Next Generation Network
NQoS  Network QoS
PDB  Packet Delay Budget
PDCCH  Physical Downlink Control CHannel
PDV  Packet Delay Variation
PDN  Packet Data Network
PCEF  Policy Control Enforcement Function
PCC  Policy Charging and Control
PLR  Packet Loss Ratio
PRACH  Physical Random Access CHannel
PS  Packet Switching
PUSCH  Physical Uplink Shared CHannel
QoE  Quality of Experience
QoS  Quality of Service
QCI  QoS Class Identifier
RAN  Radio Access Network
SDF  Service Data Flow
SN  Sensor Node
TFT  Traffic Flow Template
TTI  Transmission Time Interval
UL  Uplink
VoLTE  Voice over LTE
WAN  Wide Area Network
WSN  Wireless Sensor Network


IoT presents a massive ecosystem of new elements, applications, functions and services. MTC traffic flows for IoT devices currently run over wireless Wide Area Networks (WANs) and 3G and 4G networks. However, 5G is expected to become the main platform for meeting the requirements of IoT traffic demand in the very near future. EPS protocols and architectures were aimed and enhanced to handle Human Type Communication (HTC) traffic flows. With the anticipated mass deployment of MTDs, there are concerns from the network operators' side, the service providers' side and from the application consumers' perspective. The fear is that the network ecosystem may become exhausted and/or the available network resources will not be able to meet the Application QoS (AQoS) requirements of both HTC and MTC traffic flows simultaneously. Moreover, the MTC traffic characteristics of IoT devices differ from those of HTC and can augment the risk of network performance deterioration.

Future networks will gradually be converted into integrated entities with high broadband and large-scale traffic, due to the interconnection of social media, virtual systems and physical networks [1]. It is estimated that between 50 and 100 billion MTC devices will be connected to the internet by 2020 [2], [3] and that the majority of these devices will run over cellular wireless networks and WSNs. However, cellular networks were mainly designed for Human-to-Human (H2H) communication, where the Uplink (UL) resource allocation is smaller than the Downlink (DL). In contrast, M2M traffic can produce more data in the UL channels, which can lead to a deterioration of the human QoE or overload the network. To overcome this challenge, an efficient resource allocation model can be used to take full advantage of the opportunities created by a global MTC market over cellular networks. The 3GPP and the Institute of Electrical and Electronics Engineers (IEEE) standardisation bodies have initiated working groups to facilitate such applications through various releases of their standards [4].

According to 3GPP, Long Term Evolution Advanced (LTE-A) is very close to reaching the technologically possible efficiency limits in the currently available bands; hence, LTE is expected to remain the baseline technology for wide area broadband coverage in the 5G era. However, integrating M2M communication with LTE-A can have a dramatic negative impact on LTE network performance in terms of QoS and throughput [5]. Consequently, 3GPP will continue working on enhancing LTE to make it more suitable for M2M communication, from a radio perspective as well as from a service delivery one; hence, mobile M2M traffic must not be neglected.

Costantino et al. [6] evaluated the performance of an LTE gateway using the Constrained Application Protocol (CoAP), with M2M traffic patterns and network configurations examined through simulations. The authors argued that the traffic patterns depend highly on the application used in the terminal devices; however, they did not describe or justify their choices. Their work focused on the effect of signal interference, at the radio access node, relative to the number of connected MTDs.

Lo et al. [7] studied the impact of data aggregation for M2M on QoS in an LTE-A network. In their work, they utilised a relay node as an aggregation and entry point of MTD traffic to improve LTE-A UL efficiency. Specifically, their work was based on tunnelling and aggregation of the MTDs related to a specific destination server or node, via the relay node tunnel entry point and by considering the priority of the class of service. Whilst a significant reduction in signalling overhead was achieved, there was an increase in the delay of the received packets at the destination point and an increase in the number of aggregated MTDs at the entry point.

There have been several attempts at delivering M2M communication integration with LTE-A. The current state of standardisation efforts in M2M communication and the potential M2M problems, including physical layer transmissions, were discussed in [8], which proposed a solution to provide QoS guarantees by facilitating M2M applications with hard timing constraints. Moreover, the features of M2M communication in 3GPP, an overview of the network architecture, the random access procedure and radio resource allocation were provided. The authors in [9] considered one service and proposed a cooperative strategy for energy efficient MTD video transmission with a good Network QoS (NQoS). They achieved promising results; however, future networks with different services and network components will be used in a heterogeneous environment. A relay node was proposed by the authors in [10] to facilitate the integration of M2M, with the proposed work being focused on a relaying feature in LTE-A. In [11], the authors examined the network convergence between Mobile Cellular Networks (MCNs) and WSNs, proposing that the mobile terminals in the MCN should act as both Sensor Nodes (SNs) and gateways for the WSN in the converged networks. Their work neither guaranteed E2E connectivity nor specified the Radio Access Network (RAN) technology involved. To the best of our knowledge, there has been no work, as yet, proposing an entirely heterogeneous model that serves different networks with different services.

Another strand of research considered the radio access interface, including the new Low Power Wide Area Networks (LPWANs), such as Narrow Band IoT (NB-IoT) in the licensed spectrum as well as its competitors LoRa, SigFox and Ingenu in the unlicensed spectrum [12]. The unlicensed networks have been deployed around the world since the first quarter of 2016, which motivated 3GPP to introduce NB-IoT in the second quarter of 2016. NB-IoT uses the guard bands in the 3G and 4G licensed frequency spectrum bands to provide dedicated radio access for IoT devices within the radio spectrum of the currently deployed 3G and 4G RANs [13]. In the same context, the authors in [14], [15], [16] presented a connection-less integration of low rate IoT traffic within the Physical Random Access Channel (PRACH) frequency-time slot of the LTE access technology. The previously listed technologies and research attempts did not address the effect of the heterogeneity of MTC application traffic on the related QoS performance of HTC applications when the network ecosystem resources have to be shared in such a way. The paper's main contributions are:

1) Proposing an access WSN gateway to provide access and connectivity for MTD traffic within an LTE-A Proximity (LTE-APro) eNB licensed spectrum radio interface.
2) Proposing a traffic flow management policy to define the coexistence of IoT traffic with H2H traffic over an EPS network and to provide an acceptable level of NQoS for MTC traffic flows, whilst preserving the QoE of HTCs.
3) Introducing specific QoS Class Identifiers (QCIs), which are elected to classify the MTC traffic flows, whilst taking into account the heterogeneity of SN services and applications over an EPS network, in order to differentiate the EPS treatment, from the NQoS point of view, from the standard QCIs of HTC traffic flows.

The rest of this paper is organised as follows: the next section presents a background to MTC traffic characteristics and QoS in EPS provision and management. Section III presents the design of the proposed traffic flow management model. Section IV introduces the simulation setup and presents the main results. A performance discussion is presented in Section V. Finally, Section VI concludes the paper and also puts forward suggestions for future work.

II. BACKGROUND

A. MTC traffic characteristics

MTC terminals or SNs can communicate with one or many servers, and their traffic characteristics have been discussed in detail in [17]. They can be considered as data centric communication, sending constant size packets towards a server [18], [19]. Their main traffic characteristics are as follows:

• The Service Data Flow (SDF) of MTC traffic typically runs from the SN towards the Packet Data Network (PDN), so it is dominant in the UL direction, whereas HTC SDF traffic depends on the service type and its AQoS demands during the call session or service request.

• In much MTC application traffic, the SN harvests a small amount of information, which can be transferred at a specific data rate that matches the application's needs.

• In the Next Generation Network (NGN), MTC traffic has a share of the network resources, which will no longer be limited to HTC. MTC will join the use of the licensed spectrum and take a share of the mobile broadband network access resources.

• In general, MTC traffic is delay tolerant and, for the most part, can be considered low priority. 3GPP Release 10 introduced Delay Tolerant Access in the Random Access Procedure, as part of the control plane signalling, to give MTC traffic a lower priority in the PRACH than HTC traffic [16]. However, some MTC traffic is excluded from this procedure, such as industrial control systems, fire alarms and live monitoring, as it needs to be transmitted with high priority. LTE-A Release 13 introduces new QCIs to provide high priority SDFs for emergency service application use cases [20], [21].

• SN data transmission is irregular and depends on many factors relating to the WSN type and deployment pattern. This irregularity in traffic generation can produce high traffic peaks, which could lead to starvation of network resources, specifically in the wireless access part.

The basic concern for MTC within the NGN is how to provide the required NQoS levels, or Key Performance Indicators (KPIs), which will meet the AQoS of MTC traffic, or Key Quality Indicators (KQIs), under a massive deployment of SNs, whilst simultaneously guaranteeing high levels of expected and perceived quality (QoE) for real time application users.

In order to integrate IoT traffic in the LTE-A standard, EPS resources need to be used. Consequently, it is necessary to provide a specific NQoS for MTC application traffic, so as to avoid a substantial impact on the performance of HTC traffic, such as voice, video and file transfer, and to keep the QoE at a satisfactory level.

B. QoS in EPS Provision and Management

Application traffic, including that of HTC and MTC, in Packet Switching (PS) networks has specific AQoS characteristics and demands. It requires specific levels of NQoS parameters, which can be described in terms of packet loss rate, packet one-way E2E delay, packet jitter at the end point and link throughput at the IP layer [22], [23]. Packet jitter refers to the variation in the time gap between two consecutive packets: they leave the source node with time stamps t1 and t2 and are played back at the destination node at t3 and t4, with the jitter being calculated as [24]:

Jitter = (t4 − t3) − (t2 − t1) (1)
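As a simple illustration of Eq. (1), the following Python sketch computes the jitter of two consecutive packets from their source and destination timestamps (the function name is illustrative, not from the cited reference):

def packet_jitter(t1, t2, t3, t4):
    """Eq. (1): t1, t2 are departure times at the source node;
    t3, t4 are playback times at the destination node."""
    return (t4 - t3) - (t2 - t1)

# Packets sent 20 ms apart but played back 25 ms apart -> 5 ms of jitter.
print(packet_jitter(0.000, 0.020, 0.100, 0.125))  # 0.005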

In EPS, these diverse NQoS quantitative performance parameters are provisioned via a standardised and effective mechanism, which is the class based EPS QoS concept [25]. This mechanism allows service providers to deliver service differentiation via NQoS control in the access and backhaul networks, thereby solving the issue of over-provisioning of the valuable network resources.

The NQoS control mechanism based service differentiation is performed at the EPS bearer level, with this bearer being a logical unidirectional connection between the PDN-GW and the User Equipment (UE), as well as the core element of NQoS control treatment in EPS. The EPS bearer type and level define the SDF's QoS requirements among all the EPS nodes. EPS introduces two specified bearers: the default bearer and the dedicated bearer. The former is initiated when the UE attaches to the network and remains activated until it detaches. The resource type of the default bearer is Non-Guaranteed Bit Rate (NGBR) and it supports the best effort QoS level [20]. The dedicated bearer is created dynamically on demand, when the terminal device requests a service, or access to services, that requires a higher QoS level than the default bearer, such as conversational voice or conversational video. The QoS of the EPS bearer is controlled by using a set of EPS QoS parameters, including:

1) Resource Type: defines the bandwidth allocation for the bearer, there being two types, Guaranteed Bit Rate (GBR) and NGBR. The GBR EPS bearer has a guaranteed minimum bandwidth (bit rate), while the NGBR EPS bearer's bandwidth is not assured. NGBR EPS bearer resource allocation depends only on the available resources remaining after the completion of the GBR EPS bearer resource assignment process.

2) QCI: the QoS level of the bearer is assigned to one and only one QCI, which is defined in the 3GPP PCC architecture as a reference between the UE and the Policy Control Enforcement Function (PCEF), to indicate different IP packet QoS characteristics. It characterises the SDF in the UL and DL directions by resource type, packet priority handling, Packet Loss Rate (PLR) and Packet Delay Budget (PDB). The QCI comes from the mapping process of the DSCP values specified in the IP packet header. The nodes within EPS depend on the QCI value to translate the demand for AQoS. This AQoS to NQoS mapping ensures that the packet flow carried on a specific bearer receives the minimum level of QoS in multiuser, multiservice networks [20], [25]. In LTE, the QCI is an integer value that varies from 0 to 255, carried in octet number 3 of the EPS QoS information element [20]. While LTE Release 8 specifies nine standard QCI values varying from 1 to 9 [26], LTE-A Release 13 standardised thirteen different QoS levels, with four new QCI values having been added for use in Mission Critical (MC) services for emergency services communication, as shown in Table I [20]. QCI value assignment is performed from the network side, upon request from an application running on the terminal device.

3) Allocation and Retention Priority (ARP): used for call Admission Control (AC); it is an integer value ranging from 1 (highest priority) to 15 (lowest priority) [27]. The ARP value, from the control plane perspective, sets the priority level and treatment for the EPS bearer. It permits the EPS control plane to exercise differentiation regarding control of bearer set-up and retention. Furthermore, it contains information about bearer pre-emption capability and bearer pre-emption vulnerability. The level of importance of a resource request is defined by the ARP and used for AC. Table II lists examples of ARP and related QCI values [28]. Data plane traffic packet handling, among the EPS nodes and links, is carried by an associated bearer, as identified by the QCI.

4) GBR and Maximum Bit Rate (MBR): parameters used for a GBR dedicated EPS bearer. GBR specifies the minimum provisioned and assured bit rate, while MBR sets the limit of the maximum bit rate allowed within the network. If the dedicated EPS bearer reaches the specified MBR, any newly arriving packets will be discarded [27].

5) Access Point Name Aggregate Maximum Bit Rate (APN-AMBR): a parameter applied at the UE for UL traffic only and utilised at the PDN-GW for both DL and UL traffic. It controls the total bandwidth of all NGBR EPS bearers [27].

6) UE Aggregate Maximum Bit Rate (UE-AMBR): applied by the eNB only; it specifies the maximum allowable bandwidth for all NGBR EPS bearers in the UL and DL directions [27].
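To make these parameters concrete, the sketch below encodes a few of the standardised QCI profiles of Table I as Python data; the representation is ours, for illustration only, and is not an EPS API:

from dataclasses import dataclass

@dataclass(frozen=True)
class QciProfile:
    resource_type: str  # "GBR" or "NGBR"
    priority: float     # lower value = higher scheduling priority
    pdb_ms: int         # Packet Delay Budget in msec
    plr: float          # Packet Loss Rate target

# Selected standardised QCIs from Table I.
QCI_TABLE = {
    1: QciProfile("GBR", 2, 100, 1e-2),   # Conversational Voice
    2: QciProfile("GBR", 4, 150, 1e-3),   # Conversational Video
    5: QciProfile("NGBR", 1, 100, 1e-6),  # IMS Signalling
    9: QciProfile("NGBR", 9, 300, 1e-6),  # Default bearer
}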

III. PROPOSED MODEL

A. Integration of MTC traffic into EPS and traffic flow management policy

In order to integrate IoT traffic in the LTE-A standard, EPS resources need to be used. Consequently, to avoid a substantial impact on the performance of HTC traffic, such as voice, video, HTTP and FTP services, a specific NQoS for MTC application traffic flows needs to be provided, to keep the QoE of real time application users at a satisfactory level. A policy for traffic flow management is proposed in this paper for defining MTC traffic flows over the EPS network, one that differentiates the treatment of EPS nodes and links, from the NQoS point of view, for the different MTC traffic flows and separates the standard HTC traffic flows from the MTC ones. The need for a new policy emerged from the drawbacks of the current traffic flow management policy, which does not differentiate between MTC and HTC traffic flows from a network resources sharing point of view.

Lemma: The MTC traffic flows and HTC traffic flows compete for shares of the available network resources.

Proof: Let the total available network resources, related to the network capacity, be R; these are assigned to the traffic flows transported through the network in each Transmission Time Interval (TTI). If the network resources allocated to traffic flow j are r_j and the total number of traffic flows is k, then the total allocation can be expressed as:

Σ_{j=1}^{k} r_j ≤ R    (2)

where R ∈ (0, C] and C is the maximum capacity of the network from a network resources point of view. A network resource r_j can be assigned to an MTC or an HTC traffic flow. If the MTC traffic flow of a single MTD i has a share of network resources in each TTI of IoT_i, then the number of network resources assigned to all MTC traffic flows coming from MTDs can be expressed as:

Σ_{i=1}^{m} IoT_i ≤ S    (3)

where m is the maximum number of handled MTC traffic flows and S ∈ (0, R]. Under the same consideration, the HTC traffic flow of a single Human Type Device (HTD) n has a share of network resources in each TTI of H_n; therefore, the number of network resources assigned to the HTC traffic flows coming from HTDs can be expressed as:

Σ_{n=1}^{l} H_n ≤ T    (4)

where l is the maximum number of handled HTC traffic flows and T ∈ (0, R]. From equations (2), (3) and (4) we can deduce that:

k = m + l (5)

and

R = S + T (6)

and


TABLE I: Standardised QCI characteristics

QCI | Resource Type | Priority | PDB (msec) | PLR | Service
0, 255 | Reserved | | | |
1 | GBR | 2 | 100 | 10^-2 | Conversational Voice
2 | GBR | 4 | 150 | 10^-3 | Conversational Video
3 | GBR | 3 | 50 | 10^-3 | Real Time Gaming
4 | GBR | 5 | 300 | 10^-6 | Non-Conversational Video (Buffered Streaming)
65 | GBR | 0.7 | 75 | 10^-2 | Mission Critical user plane Push To Talk voice (MCPTT)
66 | NGBR | 2 | 100 | 10^-2 | Non-Mission-Critical user plane Push To Talk voice
5 | NGBR | 1 | 100 | 10^-6 | IMS Signalling
6 | NGBR | 6 | 300 | 10^-6 | Video (Buffered Streaming), TCP-Based
7 | NGBR | 7 | 100 | 10^-3 | Voice, Video (Live Streaming), Interactive Gaming
8 | NGBR | 8 | 300 | 10^-6 | Video (Buffered Streaming), TCP-Based
9 | NGBR | 9 | 300 | 10^-6 | Video (Buffered Streaming), TCP-Based, default bearer
69 | NGBR | 0.5 | 60 | 10^-6 | Mission Critical delay sensitive signalling
70 | NGBR | 5.5 | 200 | 10^-6 | Mission Critical Data
10-64, 67-68, 71-127 | Spare | | | |
128-254 | Operator Specific | | | |

TABLE II: An example of QCI to ARP mapping

Index | ARP Priority* | QCI Bearer Index* | Packet Priority | Pre-emption Capability | Pre-emption Vulnerability | Resource Type | Example Services | Typical Associated Protocols
1 | 1 | 1 | 2 | Yes | No | GBR | Conversational Voice | UDP, SIP, VoIP
2 | 2 | 2 | 4 | Yes | No | GBR | Conversational Video (live streaming) | UDP, RTSP
3 | 3 | 3 | 3 | Yes | No | GBR | Real Time Gaming | UDP, RTP
4 | 4 | 4 | 5 | Yes | No | GBR | Conversational Video (hi-definition) | UDP, RTSP
5 | 5 | 5 | 1 | Yes | Yes | NGBR | IMS signalling | TCP, RTP
6 | 5 | 6 | 6 | Yes | Yes | NGBR | Video (buffered streaming), TCP applications | TCP, FTP
7 | 6 | 7 | 7 | Yes | Yes | NGBR | Voice, Video (Live Streaming), Interactive Gaming | TCP, HTTP, VoIP
8 | 6 | 8 | 8 | Yes | Yes | NGBR | Video (buffered streaming), TCP applications | TCP, SMTP, POP
9 | 8 | 9 | 9 | Yes | Yes | NGBR | Video (buffered streaming), TCP applications | TCP, FTP, IMAP
10 | 15 | 5 | 9 | No | Yes | NGBR | UDP based applications | UDP, SNMP
11 | 14 | 6 | 9 | No | Yes | NGBR | UDP based applications | UDP, SNMP, POP
12 | 13 | 7 | 9 | No | Yes | NGBR | UDP based applications | UDP, RTP
13 | 11 | 8 | 9 | No | Yes | NGBR | UDP based applications | UDP
14 | 9 | 9 | 9 | No | Yes | NGBR | UDP based applications | UDP, RTP
15 | 1 | 1 | 2 | Yes | No | GBR | UDP based applications | UDP, SIP, VoIP

* ARP and QCI values from 1-9 are standardised in LTE R8; the rest are left for network operator setting.

Σ_{j=1}^{k} r_j = Σ_{i=1}^{m} IoT_i + Σ_{n=1}^{l} H_n ≤ R    (7)

where the lower and upper boundaries of IoT_i and H_n in Eq. (7) are (0, R]. The network resources allocation in the Evolved Universal Terrestrial RAN (E-UTRAN) part and the Evolved Packet Core (EPC) is mainly based on the bearer's ARP and QCI values, as well as the associated NQoS values, which are mapped from the AQoS's DSCP value. As expected in the NGN, the MTC/IoT traffic flows will potentially share resources with the HTC traffic flows; therefore, the network resources could be assigned to the HTC traffic flows if their bearers have higher and/or network supported ARPs and QCIs compared with the MTC traffic flow bearers, and vice versa. If the MTC traffic flow bearers get a resource allocation higher than those of HTC, without determining the management of network resources sharing among them, this will produce serious issues regarding the offered NQoS and users' QoE. This deterioration in network performance will be clear in NGBR HTC applications, such as FTP and HTTP services.

A proposed solution for the issue of NQoS deterioration is presented here by isolating the available network resources (R), in the form of an adaptive split of (R) between the MTC network resources share (S) and the HTC network resources share (T), by means of two disjoint sets. The size of the two shares (S and T) is a percentage of the available (R), which can be adaptively adjusted based on dynamic changes in the network resources loading share, according to traffic flow conditions. Furthermore, the percentage can be customarily set on a static basis, for supporting HTC traffic flows over MTC traffic flows and to reduce the impact of increased network signalling overhead between the E-UTRAN part and the EPC. Therefore, Eq. (6) can be rewritten as:

R = S + T (8)

S = βR (9)

T = αR (10)

β + α = 1 (11)

where α is the percentage of the network resources share allocated/assigned to the HTC traffic flows and β is the remaining network resources percentage. For adaptive and dynamic operation, α and β are set as a function of AQoS demands, to ensure the provision of NQoS, in terms of network resources, for all traffic flows in the network nodes and transmission links. The EPS system primarily depends on the bearer settings, which provide the required NQoS metrics and cover the AQoS needs represented by the terminal type and DSCP. From Eq. (11), α and/or β can be used as tuning parameters to provide the required percentage value of the network resources share, via the bearer's setting, binding and modification parameters. In the CN, and precisely in the Policy and Charging Rule Function (PCRF), PCEF and Mobility Management Entity (MME) function units, these NQoS parameters are Resource Type, ARP, QCI, PDB and PLR. It is then possible to express α and β as a function of AQoS and/or NQoS. Furthermore, they can be a function of the aforementioned AQoS needs and the NQoS provided (from the network side), to cover/meet the required performance and to optimise network resources utilisation, as:

α, β = f(AQoS, NQoS) = f(Terminal Type, DSCP, Resource Type, ARP, QCI, PDB, PLR)    (12)

where (Terminal Type, DSCP) represent the AQoS requirements and (Resource Type, ARP, QCI, PDB, PLR) represent the NQoS offered.

The dynamic variations of α and β in every TTI can be deployed in the EPC individually, to set the traffic management policy according to the terminal device type (human or machine) and the network resources required by the users' traffic flows, following Eq. (9) and Eq. (10). The traffic management policy is realised via an active Policy Charging and Control (PCC) rule setting, or can be applied according to the local policy configuration in the network gateways or from the PCRF network core elements.

The aforementioned PCC rule setting is presented as an adaptive algorithm conforming to the network traffic flows. Another methodology can be interpreted from the previous analysis for setting the control mechanism of the traffic flow management policy. It consists of two components, represented by α and β. Each one can work with a distinct approach regarding network resources allocation, separately within its allowable limit, such as optimisation and scheduling of network resources usage over different parts of the network. This approach sets a traffic flow management policy by defining a fixed network resource share between HTC and MTC traffic flows. Eq. (12) can be rewritten to extract and set the related AQoS and NQoS metrics as follows:

f^{-1}(α) = (HTC Terminal, DSCP, ARP, QCI, PDB, PLR)    (13)

f^{-1}(β) = (MTC Terminal, DSCP, ARP, QCI, PDB, PLR)    (14)

Fig. 1 shows the flowchart for setting up the traffic flow management policy in each TTI; alternatively, it can be built on a specific duty cycle time basis, whereby the procedure is matched with the TTI of EPS so as to be combined with the EPS resource allocation policy. The flowchart starts by checking the available network resources and then checks for traffic flow requests from the HTC and MTC terminals. In Fig. 1a the dynamic adaptive network resource share policy is applied, whereas in Fig. 1b a fixed network resource share policy is applied, with a specific HTC terminals network resource share of α%. A matching function is used to match the AQoS needs with the available NQoS and to serve the GBR bearers as well as the NGBR bearers, for HTC terminal and MTC terminal traffic flows simultaneously, based on the T and S shares.

An estimated, synthetic set of values for α and β is listed in Table III, according to the traffic density patterns from the realistic network scenarios proposed in [18], [29].

TABLE III: Proposed estimated values of α, β and associated network resources share

Function | Percentage Share over whole Network Resources | Percentage (sub share) from each main Share | Resource Type
α | 80% | 67% | GBR
  |     | 33% | NGBR
β | 20% | 60% | GBR
  |     | 40% | NGBR

There are three main types of MTC traffic: real time, delay sensitive and delay tolerant. The proposed QCIs are aimed at these three types, as a baseline in this work, and could be extended to provide more precise differentiation if needed.


[Fig. 1 flowchart, executed per TTI: check the available network resources (R); check the HTC and MTC traffic flow requests; set the HTC and MTC resource shares (T = αR) and (S = βR); apply the resources share policy; serve all HTC and all MTC requested GBR bearers from the allocated shares (T) and (S); serve all HTC and all MTC requested NGBR bearers from the residual resources of each share; set the HTC and MTC traffic flow NQoS regarding their AQoS, shares and α, β; prepare for the next assignment. In (a) the shares are re-evaluated each cycle; in (b) they remain fixed.]

(a) Dynamic network resources share policy. (b) Fixed network resources share policy.

Fig. 1: Traffic flow management policy setting.
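A minimal Python sketch of the per-TTI serving order in Fig. 1 is given below, assuming a simple greedy allocation; the request/bearer data structures and function names are illustrative, not taken from the paper's implementation:

def allocate_shares(R, alpha, htc_requests, mtc_requests):
    """Split resources R into the HTC share T = alpha*R and the MTC share
    S = beta*R (Eqs. 9-11), then serve GBR bearers first and NGBR bearers
    from whatever residual resources remain in each share."""
    beta = 1.0 - alpha                  # Eq. (11): alpha + beta = 1
    T, S = alpha * R, beta * R          # Eq. (10) and Eq. (9)

    def serve(share, requests):
        granted = {}
        # GBR bearers are served first, up to the share limit.
        for bearer_id, demand, is_gbr in requests:
            if is_gbr:
                granted[bearer_id] = min(demand, share)
                share -= granted[bearer_id]
        # NGBR bearers compete for the residual resources of the share.
        for bearer_id, demand, is_gbr in requests:
            if not is_gbr:
                granted[bearer_id] = min(demand, share)
                share -= granted[bearer_id]
        return granted

    return serve(T, htc_requests), serve(S, mtc_requests)

For example, with α = 0.8 as in Table III, 80% of the available resources are offered to the HTC bearers and the remaining 20% to the MTC bearers in each TTI.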

The index values of the new QCIs are 71, 72 and 73, which are shown in Table IV and chosen from the spare QCI values.

TABLE IV: Proposed MTC QCIs

QCI | Resource Type | Priority* | PDB (msec) | PLR | Example Services
71 | GBR | 1 | 100 | 10^-3 | Real time, alarm monitoring
72 | GBR | 2 | 200 | 10^-3 | Real time, live monitoring
73 | NGBR | 3 | 300 | 10^-6 | Delay tolerant, metering

* The priority is related to the MTC traffic type.

In Table IV, the GBR resource type has been specified for the real time and delay sensitive MTC traffic, while the NGBR resource type has been specified for the delay tolerant MTC traffic. The priority setting is related to the MTC traffic type, in order to integrate seamlessly with the priority values assigned to the standard QCIs. The PDB and PLR are specified to fulfil the minimum NQoS required to meet the AQoS needs of MTC traffic at acceptable levels.
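In the same illustrative spirit as the earlier QCI sketch, the proposed MTC QCIs of Table IV can be written down as data (values copied from Table IV):

# Proposed MTC QCIs from Table IV.
MTC_QCI_TABLE = {
    71: {"resource_type": "GBR",  "priority": 1, "pdb_ms": 100, "plr": 1e-3},  # real time, alarm monitoring
    72: {"resource_type": "GBR",  "priority": 2, "pdb_ms": 200, "plr": 1e-3},  # real time, live monitoring
    73: {"resource_type": "NGBR", "priority": 3, "pdb_ms": 300, "plr": 1e-6},  # delay tolerant, metering
}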

B. Proposed Network Architecture

Fig. 2 shows the proposed network architecture for bridging the WSN devices with the EPS network. The integration is achieved via a WSN gateway, which is in turn connected to the WSN base stations or coordinators.

[Figure: MTDs and WSN base stations BS1-BSn connect via the WSN gateway to the LTE-A access network, the PDN and the application server.]

Fig. 2: The proposed network architecture.

The infrastructure of the proposed network is a combination of different WSNs that use different routing protocols, including IEEE 802.15.4 (ZigBee), IPv6 over Low power Wireless Personal Area Networks (6LoWPAN) and the Low-Energy Adaptive Clustering Hierarchy (LEACH) protocol. Each WSN has a coordinator or base station that provides the harvested information to a WSN gateway. The gateway's design, implementation and related protocols are explained in detail in the following subsection.


C. Access Network (WSN Gateway Design)

WSNs are a fundamental aspect of enabling MTC on the path towards IoT technology in smart cities; they collect data from the targeted areas and pass it on through wireless communication. These networks can be connected to a higher level system (such as LTE operator networks) via a gateway, the main function of which is to provide connectivity and interaction between the WSN base station and the external PDN, transparently.

[Figure: the WSN gateway's dual protocol stack: an IoT-facing side with PHY and MAC (IEEE 802.3 or IEEE 802.11) plus IP, and an LTE-Uu-facing side with PHY, MAC, RLC, PDCP, RRC/AS, NAS and IP, bridged by the gateway applications.]

Fig. 3: The WSN Gateway protocol stack.

In the proposed scheme, the WSN gateway connects a mobile broadband network access node with the WSN IoT infrastructure (as converged networks). In this paper, the methodology used to connect a massive WSN to a PDN via the mobile broadband network infrastructure focuses on the connectivity of the WSN traffic and on QoS issues, as well as on consideration of the QoE of real time application users. The WSN gateway serves as a dual mode node and as a protocol converter between the WSN base station and an E-UTRAN access node. An LTE-APro eNB access node is presented, introducing LTE-A as the mobile broadband baseline for this study and for the future NGN. The integration of multiple networks has two main types, tight and loose coupling techniques or topologies [30], with each type having its advantages and drawbacks from a QoS point of view. Regarding the loose coupling technique, the integration is offered by the WSN gateway, which provides dual mode connectivity between the E-UTRAN and the WSNs. The protocol stack of the WSN gateway is shown in Fig. 3, where a protocol conversion is applied between the WSN coordinator and LTE-A traffic flows.

The control plane is assumed to be already set up among the CN, eNB and end devices. This includes the Random Access Procedure (RAP), the Non Access Stratum (NAS) signalling and the Radio Resource Control (RRC). After completing the set up phase, the Packet Data Convergence Protocol (PDCP) applies the Traffic Flow Template (TFT) packet filtering process and assigns bearers to its logical channel prioritisation levels [31]. This process controls and divides the available network resources in the UL and DL directions among the different bearers/SDFs over the entire EPS network [32]. These key features of controlling the QoS level of the initiated bearer between the WSN gateway and the PDN-GW are performed in the PDCP layer of the WSN gateway for the UL direction and in the GPRS Tunnelling Protocol User Plane (GTP-U) layer of the PDN-GW for the DL direction. The settings of the TFT filtering process, the bearer IDentifier (ID), the Tunnel Endpoint IDentifier (TEID) and the bearer IP address are instructed between the PCRF and the PDN-GW in the CN.

An efficient multiplexing algorithm for MTC traffic has been implemented between the WSN gateway and the PDN-GW. MTC traffic is aggregated in the WSN gateway and then multiplexed together, which is carried out in the PDCP layer, in the UL direction, at the bearer level. The aggregation and multiplexing of different MTC traffic is based on the newly defined QCI values, to match the required QoS levels for MTC traffic flows during their transmission via the mobile broadband network. This WSN gateway is a low cost device; it multiplexes different MTC traffic flows at the bearer level without using any extra resources at the radio interface or in the backhaul. Instead of requesting a new resource allocation in the UL direction from the eNB for an individual WSN node or base station, the requesting process is performed by the WSN gateway for a group of multiplexed service data flows originating from the MTDs, based on the QoS level. This multiplexing is undertaken intelligently in the PDCP layer, using TFT packet filtering on the PDCP Service Data Unit (SDU), which comes from the IP layer and represents the different MTD traffic flows, to produce a multiplexed PDCP Packet Data Unit (PDU) that is mapped to the earlier initiated Data Radio Bearers (DRBs) between the WSN gateway and the PDN-GW. An overview of the proposed mechanism is shown in Fig. 4.
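A hedged sketch of this PDCP-level grouping is shown below: MTD IP flows are classified by a TFT-like DSCP filter and aggregated per QCI onto the pre-established bearers. The flow representation and function names are illustrative; the DSCP-to-QCI mapping follows Table VIII:

from collections import defaultdict

def multiplex(flows, dscp_to_qci, default_qci=9):
    """Group MTD IP flows onto bearers: one aggregated payload list per QCI."""
    bearers = defaultdict(list)
    for src, dscp, payload in flows:
        qci = dscp_to_qci.get(dscp, default_qci)  # unknown DSCP -> default bearer
        bearers[qci].append(payload)              # aggregate payloads per bearer
    return bearers

# DSCP classes of the MTC services mapped to the proposed QCIs (Table VIII).
dscp_to_qci = {"CS3": 71, "CS4": 72, "CS1": 73}
pdus = multiplex([("mtd-1-1", "CS3", b"alarm"),
                  ("mtd-1-2", "CS1", b"meter")], dscp_to_qci)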

IV. SIMULATION SETUP AND RESULTS

A. Simulation Setup

The performance of the proposed network model is studied using the Riverbed Modeler version 18.5.1 simulator [24] from Riverbed Technologies Ltd.¹ This version supports the 3GPP standard and the Modeler library offers a variety of protocol types and device models to support accurate simulation scenarios [33]. Modifications of the standard Modeler models were carried out in order to implement the proposed simulation setup [24].

The designed E-UTRAN consists of one macro eNB, which is connected to the EPC and PDN via an optical transport network, with 7 WSN gateways and 30 active UEs positioned randomly within the coverage area of the eNB. Each WSN gateway is connected to one WSN base station (coordinator) via an Ethernet connection using the IEEE 802.3 protocol, which is used to convey the harvested information towards the application server.

¹ Licensed software offered by Brunel University London.


[Figure: MTD IP flows from WSN1/BS1 through WSNn/BSn are filtered and mapped at the WSN gateway (TFT/SDF template setting, mapping/de-mapping and multiplexing/de-multiplexing) onto SDF1-SDF4 under QoS policies 1-4, then carried via the eNB, SGW and PGW to the application servers in the PDN.]

Fig. 4: Multiplexing process.

TABLE V: Simulation configuration parameters

Parameter | Setting
No. of eNBs | one, single sector
eNB coverage radius | 2000 m
MAC layer / PSCH scheduling mode | Link adaptation & channel dependent scheduling (Max. I_MCS = 28, I_TBS = 26)
IP layer QoS (scheduling algorithm) / outside EPS rules and functions | Default (Best Effort)
Path loss, fading and shadow fading loss model | UMa (ITU-R M.2135) [34]
UL multipath model | LTE SC-FDMA ITU Pedestrian A
DL multipath model | LTE OFDMA ITU Pedestrian A
UE velocity | 0 km/sec
eNB, UE and WSN gateway receiver sensitivity | -200 dBm
MIMO transmission technique | Spatial multiplexing, 2 codewords, 4 layers
UE and WSN gateway no. of receive antennas | 4
UE and WSN gateway no. of transmit antennas | 1
eNB no. of receive antennas | 4
eNB no. of transmit antennas | 4
Cyclic prefix | Normal (7 symbols/slot)
Duplexing mode | FDD
System BW | 20 MHz
PUCCH, PDCCH & PHICH configurations | Modeler default values
UL carrier frequency (operating band 1) [16] | 1920 MHz
DL carrier frequency (operating band 1) [16] | 2110 MHz
UE and WSN gateway antenna gain | -1 dBi
eNB antenna gain | 15 dBi
eNB, UE and WSN gateway maximum transmission power | 27 dBm
Modulation and Coding Scheme (MCS) | Adaptive
No. of TTIs | 140000
eNB, UE and WSN gateway PDCP layer max. SDU | 1500 Bytes
PDCP layer Robust Header Compression (RoHC) ratio | UDP/IP: uniform(0.5714, 0.7143); TCP/IP: uniform(0.1, 0.4); RTP/UDP/IP: uniform(0.1, 0.15)
UL/DL HARQ maximum retransmissions | 3

The application servers are connected to the EPS network through the internet backbone network. End device application service requests and call setup support the use of the Session Initiation Protocol (SIP), which is configured among the UEs, WSN gateways, application servers and the IP Multimedia Subsystem (IMS) server. The simulated network contains Profile Definition, Application Definition, LTE Configuration and Mobility Management entities. Furthermore, the path loss, fading and shadow models used in the wireless links follow the Urban Macro-cell ITU-R M.2135 model [34]. A 20 MHz bandwidth has been set in the EPS RAN part, with a maximum coverage radius of 2 km, in order to provide a realistic scenario for the wireless part of the simulated network. The typical E-UTRAN parameter settings are listed in Table V. 10 UEs initiate real time voice calls using the VoLTE service, where 5 of them are the voice callers and the other 5 the voice called parties. This service is set to use the high data rate codec (G.711), with a voice payload frame size of 10 msec and 80 Bytes per frame [35]; this produces one digitised voice payload frame every 10 msec at a 64 Kbps rate, which is the voice application layer PDU. The consequent physical layer VoLTE PDU rate is 96 Kbps, including all the protocol overheads with RoHC [24]. The VoLTE bearer traffic stream bandwidth at the physical layer matches and occupies the maximum GBR bearer setting designed for the VoLTE service, producing full utilisation of the voice bearer bandwidth.


TABLE VI: MTD traffic characteristics

Service / Application | ToS / DSCP | IP Layer Packet Size | IP Layer Packet Rate
Alarm | Excellent Effort / CS3 | 512 Bytes | 4 packets/sec
Live Monitoring & Remote Monitoring | Streaming Multimedia / CS4 | 512 Bytes | 4 packets/sec
Metering | Background / CS1 | 512 Bytes | 4 packets/sec

TABLE VII: H2H service and application characteristics

Service / Application | Application Layer Parameter | Setting
Voice | Encoder scheme | G.711
 | Frames per packet | 1
 | Frame size | 10 msec
 | Compression/decompression delay | 20 msec
 | ToS/DSCP | Interactive Voice / CS6
Video Conferencing | Frame interarrival information | 10 frames/sec
 | Frame size information | 128x120 pixels
 | ToS/DSCP | Streaming Multimedia / CS4
FTP | Command mix (Get/Total) | 100%
 | Inter-request time | Uniform, min. 27 sec, max. 33 sec
 | File size | Constant, 500 Bytes
 | ToS/DSCP | Standard / CS2
HTTP | Specification | HTTP 1.1
 | Page interarrival time | Exponential, 60 sec
 | ToS/DSCP | Best Effort / CS0

TABLE VIII: Bearer configuration

QCI | ARP | Resource Type | Max. & GBR (kbps) | TFT Packet Filter DSCP Value | Service / Application
1 | 1 | GBR | 96 | CS6 | VoLTE
2 | 2 | GBR | 384 | CS4 | Video Conferencing
6 | 6 | NGBR | 384 | CS2 | FTP
7 | 7 | NGBR | 384 | CS0 | HTTP
71 | 3 | GBR | 384 | CS3 | Alarm
72 | 4 | GBR | 384 | CS4 | Live Monitoring
73 | 8 | NGBR | 384 | CS1 | Metering
5 | 5 | NGBR | 384 | Modeler default setting | SIP Protocol IMS Signalling
9 | 10 | NGBR | 384 | BE | Low priority applications & undefined DSCP

This verifies that G.711 is a resource hungry codec. The 10 msec voice frame size was chosen to support the QoE estimation process of the VoLTE service with the required parameters. Another 10 UEs initiate real time video calls using the video conferencing service, where 5 of them are the callers and the rest the called parties. Of the remaining 10 UEs, 5 use the FTP application to communicate with remote server 1 and the other 5 use the HTTP application to communicate with remote server 2. The H2H service and application characteristics are listed in Table VII.

The 7 WSN gateways receive the traffic generated by the MTDs, which is routed via the WSN coordinators. Each MTD is programmed to generate its traffic as a time based event activation. The rate of activation of the MTDs at each WSN platform is 5 MTDs per second, in an accumulating fashion, starting at 135 seconds and continuing until the end of the simulation. By the end of the simulation, each base station and WSN gateway will have received and routed the traffic of 500 MTDs. The metering MTDs have an 80% share of the generated traffic, while the alarm and live monitoring MTDs have 10% each. The MTD generated traffic is proposed as being created by one of the following WSN protocols: the Constant Bit Rate Routing Protocol (CBRP), LEACH or ZigBee. The MTD traffic characteristics are listed in Table VI, which relate to the MTD traffic settings in [36]. Moreover, the bearer configuration settings in the Riverbed Modeler are listed in Table VIII.
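As a rough sanity check on the offered MTC load, the short sketch below reproduces the activation model just described: 5 MTDs activate per second per WSN platform from 135 seconds onwards, each sending 512 Byte IP packets at 4 packets/sec (Table VI). The figures are approximate and IP layer only:

def active_mtds(t, start=135.0, rate=5, cap=500):
    """Active MTDs per WSN platform at simulation time t (seconds)."""
    return 0 if t < start else min(cap, int((t - start) * rate))

GATEWAYS = 7
MTD_BPS = 512 * 8 * 4  # Table VI: 512 Bytes x 4 packets/sec = 16384 bit/s per MTD

for t in (175, 235):
    n = GATEWAYS * active_mtds(t)
    print(f"t={t}s: {n} active MTDs, ~{n * MTD_BPS / 1e6:.1f} Mbps at the IP layer")

At t = 175 s this gives the 1400 active MTDs cited with Fig. 6, and by the end of the run each gateway reaches its full complement of 500 MTDs.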

Two identical and separate simulation scenarios were built in a single Riverbed Modeler project, in order to facilitate the scenario run sequencing process as well as the comparison procedures for the measurements. The first considered MTC traffic as a best effort traffic flow, whereas the second used the proposed policy for traffic flow management. Each scenario was run 30 times sequentially with randomly chosen seeds [37], [38], since data produced from a single execution cannot give a correct picture of the performance of the framework [39]. Furthermore, these scenario run replications were set with multi-seed value settings of the random number initialiser, in order to obtain outcomes with a confidence interval of 95% as an estimated validity of the results, resulting from the above mentioned replication count [24], [37], [39]. Specifically, this approach is designed in the Riverbed Modeler software to support confidence estimation for scalar data reporting [24], [40].
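For clarity, the 95% confidence interval over the 30 per-seed scalar outcomes can be computed as in the sketch below (a normal approximation of what the Modeler reports internally):

import math
import statistics

def ci95(samples):
    """Two-sided 95% confidence interval for the mean of per-seed results."""
    mean = statistics.mean(samples)
    half = 1.96 * statistics.stdev(samples) / math.sqrt(len(samples))
    return mean - half, mean + half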


[Figure: the simulated topology: a macro eNB serving UE_1-UE_30 and WSN Gateways 1-7 (each attached to WSN1-WSN7), connected through the EPC and the Internet backbone network to the MTC application server, the HTC FTP/HTTP servers and the IMS, with LTE Configuration, Profiles and Applications definition entities.]

Fig. 5: Riverbed Modeler network setup.

This data is collected in multi-seed parametric runs. The confidence interval markers are not shown in the result figures, in order to emphasise and highlight the assessment comparing the two aforementioned scenarios. The designed baseline EPS network model is shown in Fig. 5.

B. Simulation Results

Two Riverbed Modeler simulation scenarios with the proposed WSN gateway were conducted in order to evaluate and validate the performance of the suggested system. For both scenarios, the simulation run time was set at 240 seconds. This time was specified to match the estimated traffic capacity of the Physical Uplink Shared CHannel (PUSCH), with respect to the number of attached devices and their aggregated UL traffic load rates at the eNB, and, furthermore, to inspect how the proposed traffic management policy, applied at the bearer level, manifests at the physical layer, in terms of PUSCH utilisation and throughput, once the bearers start competing under scarcity of network resources. The UL capacity mainly depends on the proposed physical layer structure parameters and the PUCCH and random access configurations of the eNB, UEs and WSN gateways, as shown in Table V. When the UL traffic loads reach the estimated UL capacity, it is shared among the devices attached to the eNB according to their UL traffic loads. The PUSCH capacity represents the bottleneck in LTE-A networks [41]. According to [24], [42] and [26], the estimated data plane PUSCH traffic capacity is 73.05 Mbps per TTI, whereas the planned maximum aggregated data plane PUSCH traffic load of the services and applications in the simulation scenarios reaches 70.7 Mbps per TTI at 235 seconds. The remaining 2.35 Mbps per TTI are left for IMS, SIP, SDP, RTP, RTCP and other application layer control signalling traffic from the Transport, Session, Presentation and Application layers. The latter protocol, service and application traffic loads are measures of data plane traffic load, apart from the LTE network control plane signalling overheads such as NAS and RRC [14].
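The UL budget quoted above reduces to simple arithmetic (the capacity figures are taken from the text, not recomputed from [24], [42], [26]):

PUSCH_CAPACITY = 73.05  # Mbps per TTI, estimated data plane PUSCH capacity
PLANNED_LOAD = 70.7     # Mbps per TTI, planned aggregate load at 235 seconds

headroom = PUSCH_CAPACITY - PLANNED_LOAD
print(f"Headroom for IMS/SIP/SDP/RTP/RTCP signalling: {headroom:.2f} Mbps per TTI")
# -> 2.35 Mbps per TTI, the remainder quoted in the text.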

To provide an acceptable estimation of the long term performance of the proposed framework, the traffic generated by the applications used by any node in the simulation models is only created when the application is active. The default setting of Riverbed Modeler requires a network warm-up time at the beginning of the simulation experiment [24], typically set to 100 seconds [43]. This time is important in any network simulation model run for the first time from an empty scenario, allowing queues and other simulated network parameters to reach steady-state running conditions. In addition, it allows other devices in the simulated scenario, such as routers and servers, to initialise. Therefore, a 100 second warm-up time provides sufficient time for all the network protocols to initialise and converge to their stable states [43], [37]. The RAPs and attachment processes of the HTDs and WSN gateways to the EPS were performed between 100 and 105 seconds of the simulation run time, on a random basis with different time instances for each device. The HTC service requests and application start times were set within 105 to 110 seconds of the simulation run time. The MTC service and application traffic flow start and initialisation times were set at 135 seconds, as mentioned in Section IV-A.

Fig. 6 shows the total records of the PUSCH utilisation percentage, the aggregated PUSCH throughput and the Physical Downlink Control CHannel (PDCCH) utilisation percentage of the eNB, respectively, during the simulation runs. In Fig. 6a and Fig. 6b, the proposed model gave a better utilisation percentage and a better throughput, respectively. The default model utilised 100% of the PUSCH at 174 seconds, with a maximum aggregated PUSCH throughput of 37 Mbps per TTI, while the proposed model reached 100% PUSCH utilisation at 228 seconds, with a maximum aggregated PUSCH throughput of 64 Mbps per TTI. At 174 seconds there were 1400 active MTDs,


[Fig. 6: eNB Performance. (a) PUSCH utilization (%) vs. time (sec); (b) PUSCH throughput (Mbps) vs. time (sec); (c) PDCCH utilization (%) vs. time (sec); Default vs. Proposed.]

The PDCCH reached full capacity utilization in the default model at 228 seconds of the simulation run time, while the proposed model operated at 95% utilization under the congestion condition, as shown in Fig. 6c. It is clear from these physical layer KPIs that the eNB achieved an improvement in PUSCH utilization, PUSCH throughput and PDCCH utilization during congestion and full or over-utilization of the links, besides normal traffic conditions. The E2E NQoS performance measurements show the impact of the proposed traffic flow management policy on the HTC and MTC traffic flows, as demonstrated in the following paragraphs.

Fig. 7 demonstrates the results for the MTC traffic. Fig. 7a represents the PLR percentage of the received traffic from all active MTDs, with respect to their number in the whole network, for the default and proposed models. The figure compares the PLR of the two models and clearly shows a significant reduction in packet loss levels for the proposed one.

Fig. 7b demonstrates the one way E2E packet delay between one of the WSN coordinators and the MTC application server at the IP layer, with respect to the simulation run time. The default network model's E2E packet delay increased as traffic was generated from newly activated MTDs, which joined the network cumulatively: the delay was 100 msec at 175 seconds of the simulation run time, after which it increased dramatically and reached 2016 msec. The proposed network model, in contrast, gave a significant reduction in E2E packet delay, which ranged between 11.5 and 15 msec.

Fig. 7c shows the average IP packet E2E delay variation in the default and proposed models. This metric was measured at the IP layers of one of the WSN base stations (labelled WSN2) and the MTC application server. The default model gave a maximum average IP E2E delay variation of 615 msec, while for the proposed model this metric remained between 1.87 msec and 4.856 msec. The delay variation is very important, since it affects the in-sequence delivery of data streams to the transport layer on the receiver side.

The simulation results for the VoLTE service are shown in Fig. 8, which includes the PLR as one of the important NQoS metrics of VoLTE. It measures the error rate in the packet stream forwarded to the voice application by the transport layer. Fig. 8a shows the average E2E PLR of the packets between the two calling parties. The figure shows an increase in the PLR value over the simulation run time in both the default and proposed models. The proposed model's PLR was higher than the default one, with maximum values of 1.81% and 1.61%, respectively.

The PLR per voice bearer in both scenarios remained lower than the maximum allowable limit, which is 1% for the voice application bearer, as presented in Table I [20]. The measured value represents the total accumulated E2E PLR in the UL and DL directions, while the maximum defined voice bearer PLR limit applies to one direction only. Therefore, the average of the maximum achieved E2E PLR per voice bearer (UL and DL voice bearers) is 0.905% for the proposed model and 0.805% for the default model, which is a good level, since most up-to-date voice codecs perform well up to 1% PLR [44]. The irregular variation of the PLR during the simulation run time was due to the variation of the radio interface environment (path loss and fading effects in the UMa model) between the eNB and the calling parties, as well as to the increased traffic loads during the simulation runs.
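
Since the measured PLR aggregates both directions, the per-bearer figures quoted here follow from splitting the E2E value evenly across the UL and DL voice bearers:

\[ \frac{1.81\%}{2} = 0.905\%\ \text{(proposed)}, \qquad \frac{1.61\%}{2} = 0.805\%\ \text{(default)}. \]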


[Fig. 7: Simulation results for MTC traffic. (a) PLR (%) vs. number of WSDs; (b) E2E packet delay (msec) vs. time (sec); (c) PDV (msec) vs. time (sec); Default vs. Proposed.]

The consequence of the increased traffic loads was throttling of the UL, which created congestion in the PUSCH. VoLTE service packet delay is called mouth to ear delay, which is one of the most important factors to be considered when dealing with the QoS of a real time voice service in PS networks, such as the VoLTE service in our network model. Fig. 8b shows the average E2E packet delay of the VoLTE service in the default and proposed network models.

The default model's average E2E packet delay started at 114.58 msec and reached 116.9 msec at 150 seconds of the simulation run time, remaining at this value until the end of the run, whereas the proposed network model kept its average E2E packet delay at 122±0.1 msec from the start of the VoLTE service traffic flow until the end of the simulation run. This measure was taken E2E, i.e. across the UL/DL directions. There was thus a slight degradation in this NQoS metric, as the proposed model gave delay values about 7 msec higher than the default model. Nevertheless, the average E2E delays of both the default and proposed models still fulfil the 3GPP and ITU-R standardisations as well as the requirements for VoLTE service E2E packet delay, where the maximum allowable one way packet delay is 150 msec in order to achieve a high QoE [25], [45], [46].

Packet jitter is one of the most important QoS metrics for real time services. Fig. 8c shows the average jitter of the VoLTE service in the proposed and default models. In this figure, the default network model gave better packet jitter performance than the proposed network model. The average packet jitter values of the default model ranged between -0.00844 msec and -0.000974 msec for the received packets of the VoLTE service traffic up until the end of the simulation run time, whilst the proposed model's average packet jitter for the VoLTE traffic flow started at -1.7 msec and had decreased in magnitude to -0.0326 msec by the end of the simulation. Clearly, there is some degradation in this NQoS metric; the consequence of increased average packet jitter is an increase in the time the received packets spend in the de-jitter buffer [47]. The magnitude of the average packet jitter in the proposed model was nevertheless reduced over the run, and the ITU-R recommends 25 msec as an acceptable value for the delay variation [48].
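
The signed jitter samples plotted in Fig. 8c can be understood through the usual per-packet definition, sketched below under the assumption that jitter is taken as the difference between the inter-arrival and inter-departure times of consecutive packets (which is why negative values appear):

    def packet_jitter(send_times, recv_times):
        """Signed per-packet jitter: (recv_n - recv_{n-1}) - (send_n - send_{n-1})."""
        samples = []
        for i in range(1, len(send_times)):
            inter_recv = recv_times[i] - recv_times[i - 1]
            inter_send = send_times[i] - send_times[i - 1]
            samples.append(inter_recv - inter_send)
        return samples  # the plotted statistic is the average of these samples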

The users' QoE levels for a real time voice call measure the users' satisfaction when engaging with the VoLTE service. The MOS is a well known metric for measuring QoE via the E-Model, according to ITU-T G.107 [49]. This QoE metric can be reported as a sampled, interval or cumulative metric, using a modified instantaneous E-Model [50], [51]. The MOS statistics were collected using the instantaneous E-Model; the simulation models were set to report the MOS value as the sample mean of 100 instantaneous MOS samples every 2 seconds, which is called bucket mode [24]. Each of the 100 instantaneous MOS samples is collected within a 20 msec time interval, whereas the voice codec frame size is set to produce 1 digitized voice frame every 10 msec, meaning that the voice traffic has an inter-arrival time of 10 msec. Therefore, the analysis of each instantaneous MOS sample relies on the QoS metrics of 2 current and 2 previous digitized voice frames, such as the received voice application frame delay and error rate [51], [52]. Fig. 8d shows the MOS of the default and proposed network models. The default model started with a MOS of 4.3532 and then dropped to 4.3518, whereas the proposed network model started at a MOS of 4.347 and then increased to 4.349.
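
A minimal sketch of the bucket-mode reporting described above (the 100-sample and 2-second figures come from the text; the aggregation helper itself is an illustration, not the simulator's API):

    BUCKET_SAMPLES = 100  # instantaneous E-Model samples per reported point

    def bucket_mos(instant_mos):
        """Mean MOS per bucket: 100 samples taken 20 ms apart span one 2 s bucket."""
        return [
            sum(instant_mos[i:i + BUCKET_SAMPLES]) / BUCKET_SAMPLES
            for i in range(0, len(instant_mos) - BUCKET_SAMPLES + 1, BUCKET_SAMPLES)
        ]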


[Fig. 8: Simulation results for the VoLTE service. (a) PLR (%); (b) E2E packet delay (msec); (c) packet jitter (msec); (d) MOS; all vs. time (sec), Default vs. Proposed.]

Moreover, the QoE levels, represented by the MOS value, were maintained within good, acceptable levels during the whole voice call for both models.

For a video conferencing service over LTE, the QoS parameters represented by the PLR, packet delay and packet jitter should be assured at certain levels in order to maintain a good QoE. Fig. 9a demonstrates the PLR of the default (baseline) and proposed network models with respect to the simulation run time. This QoS metric was measured on the received packet stream forwarded by the transport layer to the video conferencing application. The default model's average PLR was 4.231% at the start of the traffic flow at 108 seconds and increased to 92.8% by the end of the simulation, whilst the proposed network model started with a PLR of 0.846%, which then increased to 8.3% by the end of the simulation.

Fig. 9b shows the average E2E packet delay of the video conferencing service packet stream forwarded to the application layer from the transport layer at the destination point, for the default and proposed network models. The measurement starts at 108 seconds of the simulation run time, when the video conferencing service starts its traffic flow, and continues until its end. The proposed network model gave better average packet delay performance than the default one over the whole simulation run, especially towards the end of the simulation, which represents the heavy traffic load period. The average packet delay of the proposed network model started at 24.9 msec and ended at 36.5 msec, whereas that of the default network model remained around 35 msec until 228 seconds, when it shot up, reaching 256.5 msec by the end of the simulation runs.

Another important QoS factor impacting the QoE of the interactive real time video conferencing service is the PDV. Fig. 9c shows the Average PDV (APDV), with respect to the simulation run time, of the video conferencing service for the received packet stream forwarded from the transport layer to the application layer at the called end device. This APDV was measured between the two video calling parties. In Fig. 9c, it can be seen that the proposed network maintained a low, stable APDV of between 0.144 msec and 0.167 msec up to the end of the simulation run time.


[Fig. 9: Simulation results for the video conferencing service. (a) PLR (%); (b) E2E packet delay (msec); (c) PDV (msec); all vs. time (sec), Default vs. Proposed.]

The default network model, by contrast, started with an APDV of 0.2458 msec, which first fell to 0.2228 msec; then, at 227 seconds of the simulation run time, the APDV increased rapidly, reaching 1.457 msec by the end of the simulation. It is clear that the default model's APDV increased rapidly when the network was under a heavy traffic load, which could be a serious issue on the receiver side for a real time playback video application [47]. The improvement of the video NQoS metrics in the proposed model over the default model is related to the GBR video bearer setting and allocation, such as the QCI and ARP. It is also related to the use of the proposed traffic management policy, which reserves a fixed share represented by the α function (as listed in Table III). Its impact became clear once the eNB's UL had been fully utilized by the HTC and MTC traffic. In the default model, the video bearer NQoS metrics such as PLR, packet delay and PDV were kept within a specific range; however, at 228 seconds the PDCCH utilization in the default model reached 100%, as shown in Fig. 6c, and all traffic then competed for the PDCCH space. Therefore, the aforementioned NQoS metrics in the default model shot up rapidly at the end of the simulation, while in the proposed model they were kept within acceptable levels, as shown in Fig. 7, Fig. 8 and Fig. 9.
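
To illustrate the fixed-share idea (a sketch only: the α fraction per QCI is the one listed in Table III, while the function and the PRB-based accounting below are assumptions for illustration, not the paper's implementation):

    ALPHA_MTC = 0.3  # hypothetical value; the actual alpha is set per Table III

    def split_ul_resources(total_prbs, alpha=ALPHA_MTC):
        """Cap MTC bearers at an alpha fraction of the schedulable UL resources,
        so HTC GBR bearers keep a guaranteed remainder under congestion."""
        mtc_prbs = int(alpha * total_prbs)
        return mtc_prbs, total_prbs - mtc_prbs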

Fig. 10 shows the results obtained for the FTP and HTTP applications in terms of the E2E packet delay, PDV and TCP retransmission metrics. The FTP NQoS metrics were inspected at the FTP application server, as the destination point, at the transport layer. FTP is a non-real time service and uses the TCP retransmission mechanism between the source and destination points. Fig. 10a shows the average E2E packet delay, with respect to the simulation run time, of all packets sent from the UEs using the FTP application, for the default and proposed network models. The proposed network model gave a higher packet delay than the default network model: its average packet delay was between 13.74 msec and 14.04 msec, whereas that of the default model was between 12.76 msec and 13.47 msec.

E2E delay variation represents another important QoS factor for the FTP application, and Fig. 10b shows the average for packets at the IP layer between one of the UEs and the FTP application server. The proposed network model's delay variation was clearly higher than that of the default network model: the maximum average for the former was 9 msec, while that for the latter was 7.5 msec. As mentioned before, the FTP application uses the TCP retransmission request mechanism, which can be reflected as lost or outdated packets at the transport layer.

Fig. 10c demonstrates the average TCP retransmission count, with respect to the simulation run time, for the default and proposed network models. It shows that, with the default model, no retransmissions occurred during the FTP application traffic flow period, while in the proposed network model they did occur, which means that there was either network congestion or a loss of packets received at the transport layer.

The HTTP service QoS factors were checked at the HTTP application server in the transport and IP layers. This is a non-real time service, which uses the TCP retransmission mechanism between the user and the application server. Fig. 10d shows the average E2E packet delay, with respect to the simulation run time, of all packets received at the transport layer, which were sent from the UEs using the HTTP service application, for the default and proposed network models.


[Fig. 10: Simulation results for the FTP and HTTP applications. (a) FTP E2E packet delay (msec); (b) FTP packet delay variation (msec); (c) FTP TCP retransmission count; (d) HTTP E2E packet delay (msec); (e) HTTP packet delay variation (msec); (f) HTTP TCP retransmission count; all vs. time (sec), Default vs. Proposed.]

The proposed network model gave an almost constant average delay, whilst the default network model did not: the proposed model's average packet delay stayed between 18.6 msec and 19.6 msec, whereas that of the default network model ranged between 16.8 msec and 22 msec. E2E delay variation is another important QoS factor for the HTTP service and is shown in Fig. 10e, which presents the average E2E delay variation of received packets at the IP layer between one of the UEs and the HTTP service application server.


The proposed network model's delay variation was higher than that of the default network model: the maximum average E2E delay variation of the former was 16.8 msec, while that of the latter was 13.5 msec.

Fig. 10f demonstrates the average TCP retransmission count, with respect to the simulation run time, for the default and proposed network models. It shows that, in the default model, no retransmissions occurred during the HTTP application traffic flow, while in the proposed network model they did occur, which indicates that there was either network congestion or a loss of packets received at the transport layer.

V. DISCUSSION

NQoS is a measure of the reliability and performance of the network nodes and links. It is a composite metric, based on various factors that characterise the network situation; consequently, deteriorations or improvements in the NQoS level can be brought about through combinations of these factors so as to fulfil the AQoS requirements.

The proposed traffic management policy, with the assistance of the WSN gateway, achieved a high QoS, as shown by all the NQoS measurements of the MTC and HTC traffic flows in this work at the IP and transport layers. The eNB performance gains, in terms of UL and DL capacity enhancement, were in PUSCH utilization, PUSCH throughput and PDCCH utilization. In the proposed model, the PUSCH utilization was reduced on average by 33.6% relative to the default model, and the PUSCH throughput of the proposed model achieved a 73.2% increase over the default model, specifically when congestion arose. A further impact of the proposed flow management policy was on the PDCCH utilization, which achieved a 5% decrease under the congestion condition relative to the default model. Furthermore, there was an increase of 179% in radio access node connectivity, in terms of the number of active connected MTDs. The achieved NQoS enhancement was in terms of the radio access node physical layer performance parameters as well as of the MTC traffic flows, which exhibited a decrease of 99.45% in the PLR, a decrease of 99.89% in the packet E2E delay and a decrease of 99.21% in the PDV.
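
The throughput gain is consistent with the peak values reported in Fig. 6b; as a quick check:

\[ \frac{64 - 37}{37} \times 100\% \approx 73\%, \]

i.e. the proposed model's 64 Mbps per TTI peak against the default model's 37 Mbps per TTI.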

Despite the increase of 12.42% in the VoLTE service PLR with respect to the default baseline model, several offsetting achievements were delivered by the proposed traffic management policy: a PLR 9.5% below the 3GPP maximum allowable limit for the VoLTE service; a packet E2E delay 4.5% above the baseline model but 18.79% below the 3GPP maximum standard packet E2E delay; a packet jitter 99.87% below the standard 3GPP maximum jitter value; and a QoE only 0.074% below the default model values, yet 8.73% above the standard values. The E2E VoLTE NQoS metrics, namely the PLR, packet delay and packet jitter, form a composite metric. Their degraded level can be improved by using a lower rate voice codec or an adaptive multi-rate voice codec at the VoLTE application layer [53]. It can also be improved by employing a scheduling algorithm at the physical layer, specifically for the VoLTE service bearer, such as TTI bundling and Semi-Persistent Scheduling (SPS) [54]. These scheduling techniques exploit the processing delay difference between the application layer voice frame generation process and the physical layer transmission procedure [55].

Another NQoS improvement was for the video conferencing service, specifically under the congestion condition, where reductions of 85.75% in PLR, 85% in packet delay and 88.5% in PDV were achieved, compared with the default model values.

The HTTP service NQoS achievement regarding the packet E2E delay was a decrease of 13.31%, while the PDV showed an increase of 37.26% relative to the baseline model. In spite of this increment, the PDV still represents a decrease of 52.26% from the standard maximum specified value.

The FTP service NQoS showed packet delay and delay variation increases of 9.12% and 20.9%, respectively, compared to the default model. Compared with the standard specified maximum allowable limits, however, the proposed policy achieved a decrease of 95.36% in the packet delay and a reduction of 63.71% in the delay variation.

Finally, the proposed model achieved the specified bearer settings for the HTDs and MTDs. Moreover, the QoE of the real time application users of the VoLTE service was maintained within the satisfaction level, as measured on the E-Model's MOS scale, while the QoE of the video conferencing service was attained from the perspective of the obtained NQoS metrics compared to the standard AQoS requirements of this service.

VI. CONCLUSION AND FUTURE WORK

This paper has addressed the challenges of heterogeneity in integrating HTC and IoT traffic, through their mutual interaction within network elements (nodes and links), in terms of the resulting, distinct E2E NQoS metrics. The motivation for the paper was the fact that the existing LTE-APro PCC rules and procedures do not differentiate between the user types of the terminal devices, which limits the NQoS levels that can be offered and hence potentially fails to meet the AQoS demands of the services in use.

An access WSN gateway has been proposed to provide E2E connectivity for MTC traffic flows within the licensed spectrum of LTE-APro media. An approach has been provided for protocol conversion and for facilitating control of the QoS level of the EPS network resources assigned to MTC traffic flows, in the context of the PCC functions and rules. A traffic flow management policy that defines MTC traffic flows over the EPS network has been put forward, in order to use a fraction of the available network resources in a seamless manner, accompanied by a new specific group of QCIs assigned to define the QoS levels of the MTC bearer connectivity. The allocation of network resources to MTC traffic flows is tuned based on terminal type needs, i.e. on the level of importance of the traffic type sent by the MTD. In addition, the QoE of real time application human users is assured within the perceived quality levels.


The simulation results have shown that the proposed traffic flow management policy within the LTE-APro network infrastructure outperforms the current one in terms of NQoS levels, not only for the HTC traffic flows but also for those of the MTCs. In terms of the VoLTE service, however, some degradation of the offered NQoS was encountered, due to the use of a high rate codec at the application layer and the absence of a scheduling procedure specific to the VoLTE bearer at the radio interface. Nevertheless, the VoLTE service QoE level was retained within the recommended standardisation level.

Future work, matching the scope of this paper and set in the context of the heterogeneity of NGNs, will centre on the use of Network Functions Virtualization (NFV) and Software Defined Networking (SDN) to place the proposed policy in the Network Operating System (NOS) layer, as an engine driving the control layer elements (controllers) to update the QoS mechanisms in network elements such as gateways in the CN, as well as Central Offices (COs) in the Centralized RAN (C-RAN).

REFERENCES

[1] B. W. Chen, W. Ji, F. Jiang, and S. Rho, "QoE-Enabled Big Video Streaming for Large-Scale Heterogeneous Clients and Networks in Smart Cities," IEEE Access, vol. 4, pp. 97–107, 2016.

[2] S. Yinbiao, P. Lanctot, and F. Jianbin, "Internet of Things: wireless sensor networks," White Paper, International Electrotechnical Commission, http://www.iec.ch, 2014.

[3] J. Eckenrode, "The derivative effect: How financial services can make IoT technology pay off," 2015. [Online]. Available: https://dupress.deloitte.com/dup-us-en/focus/internet-of-things/iot-in-financial-services-industry.html?coll=11711

[4] F. Ghavimi and H.-H. Chen, "M2M communications in 3GPP LTE/LTE-A networks: architectures, service requirements, challenges, and applications," IEEE Communications Surveys & Tutorials, vol. 17, no. 2, pp. 525–549, 2015.

[5] S. N. K. Marwat, T. Potsch, Y. Zaki, T. Weerawardane, and C. Gorg, "Addressing the challenges of E-healthcare in future mobile networks," in Meeting of the European Network of Universities and Companies in Information and Communication Engineering. Springer, 2013, pp. 90–99.

[6] L. Costantino, N. Buonaccorsi, C. Cicconetti, and R. Mambrini, "Performance analysis of an LTE gateway for the IoT," in World of Wireless, Mobile and Multimedia Networks (WoWMoM), 2012 IEEE International Symposium on a. IEEE, 2012, pp. 1–6.

[7] A. Lo, Y. W. Law, and M. Jacobsson, "A cellular-centric service architecture for machine-to-machine (M2M) communications," IEEE Wireless Communications, vol. 20, no. 5, pp. 143–151, 2013.

[8] D. Lee, J.-M. Chung, and R. C. Garcia, "Machine-to-Machine communication standardization trends and end-to-end service enhancements through vertical handover technology," in 2012 IEEE 55th International Midwest Symposium on Circuits and Systems (MWSCAS). IEEE, 2012, pp. 840–844.

[9] S. Kang, W. Ji, S. Rho, V. A. Padigala, and Y. Chen, "Cooperative mobile video transmission for traffic surveillance in smart cities," Computers & Electrical Engineering, vol. 54, pp. 16–25, 2016.

[10] S. N. K. Marwat, Y. Zaki, J. Chen, A. Timm-Giel, and C. Goerg, "A novel machine-to-machine traffic multiplexing in LTE-A system using wireless in-band relaying," in International Conference on Mobile Networks and Management. Springer, 2013, pp. 149–158.

[11] J. Zhang, L. Shan, H. Hu, and Y. Yang, "Mobile cellular networks and wireless sensor networks: toward convergence," IEEE Communications Magazine, vol. 50, no. 3, pp. 164–169, 2012.

[12] J.-P. Bardyn, T. Melly, O. Seller, and N. Sornin, "IoT: The era of LPWAN is starting now," in European Solid-State Circuits Conference, ESSCIRC Conference 2016: 42nd. IEEE, 2016, pp. 25–30.

[13] D. Flore, "3GPP Standards for the Internet of Things," Qualcomm Technologies Inc., 2016.

[14] E. Soltanmohammadi, K. Ghavami, and M. Naraghi-Pour, "A survey of traffic issues in machine-to-machine communications over LTE," IEEE Internet of Things Journal, vol. 3, no. 6, pp. 865–884, Dec. 2016.

[15] R. P. Jover and I. Murynets, "Connection-less communication of IoT devices over LTE mobile networks," in Sensing, Communication, and Networking (SECON), 2015 12th Annual IEEE International Conference on. IEEE, 2015, pp. 247–255.

[16] C. W. Johnson, LTE in Bullets. Northampton, England: Chris Johnson, 2012.

[17] K. M. Koumadi, B. Park, and N. Myoung, "Introducing the Latest 3GPP Specifications and their Potential for Future AMI Applications," KEPCO Journal on Electric Power and Energy, vol. 2, no. 2, pp. 245–251, 2016.

[18] I. Abdalla and S. Venkatesan, "A QoE preserving M2M-aware hybrid scheduler for LTE uplink," in Mobile and Wireless Networking (MoWNeT), 2013 International Conference on Selected Topics in. IEEE, 2013, pp. 127–132.

[19] L. Galluccio, S. Milardo, G. Morabito, and S. Palazzo, "SDN-WISE: Design, prototyping and experimentation of a stateful SDN solution for wireless sensor networks," in 2015 IEEE Conference on Computer Communications (INFOCOM). IEEE, 2015, pp. 513–521.

[20] ETSI, "Digital cellular telecommunications system (Phase 2+) (GSM); Universal Mobile Telecommunications System (UMTS); LTE; Policy and charging control architecture," ETSI TS 123 203 V13.6.0 (2016-03), Technical Specification, 2016.

[21] M. Chiang and T. Zhang, "Fog and IoT: An overview of research opportunities," IEEE Internet of Things Journal, vol. 3, no. 6, pp. 854–864, 2016.

[22] N. Seitz, "ITU-T QoS standards for IP-based networks," IEEE Communications Magazine, vol. 41, no. 6, pp. 82–89, 2003.

[23] ITU-T, "Recommendation Y.1541: Network performance objectives for IP-based services," Feb. 2006.

[24] Riverbed Modeler, Riverbed Technology, Inc., http://www.riverbed.com, 2016.

[25] H. Ekstrom, "QoS control in the 3GPP evolved packet system," IEEE Communications Magazine, vol. 47, no. 2, pp. 76–83, 2009.

[26] S. Albasheir and M. Kadoch, "Enhanced control for adaptive resource reservation of guaranteed services in LTE networks," IEEE Internet of Things Journal, vol. 3, no. 2, pp. 179–189, 2016.

[27] N. A. Ali, A.-E. M. Taha, and H. S. Hassanein, "Quality of Service in 3GPP R12 LTE-Advanced," IEEE Communications Magazine, vol. 51, no. 8, pp. 103–109, 2013.

[28] L. Koroajczuk, "Workshop presentation notes: How to Dimension user traffic in 4G networks," February 2014.

[29] X. An, F. Pianese, I. Widjaja, and U. Gunay Acer, "DMME: A distributed LTE MME," Bell Labs Technical Journal, vol. 17, no. 2, pp. 97–120, 2012.

[30] G. Crosby and F. Vafa, "A novel dual mode gateway for wireless sensor network and LTE-A network convergence," International Journal of Engineering Research and Innovation, vol. 5, no. 2, 2013.

[31] G. Horvath, "End-to-End QoS management across LTE networks," in Software, Telecommunications and Computer Networks (SoftCOM), 2013 21st International Conference on. IEEE, 2013, pp. 1–6.

[32] T. Innovations, "LTE in a Nutshell: Protocol Architecture," White paper, 2010.

[33] A. M. Ghaleb, E. Yaacoub, and D. Chieng, "Physically separated uplink and downlink transmissions in LTE HetNets based on CoMP concepts," in Frontiers of Communications, Networks and Applications (ICFCNA 2014-Malaysia), International Conference on. IET, 2014, pp. 1–6.

[34] H. Holma and A. Toskala, WCDMA for UMTS: HSPA evolution and LTE. John Wiley & Sons, 2010.

[35] Cisco. (2016) Voice Over IP - Per Call Bandwidth Consumption. [Online]. Available: http://www.cisco.com/c/en/us/support/docs/voice/voice-quality/7934-bwidth-consume.html

[36] S. J. M. Baygi, M. Mokhtari et al., "Evaluation performance of protocols LEACH, 802.15.4 and CBRP, using analysis of QoS in WSNs," Wireless Sensor Network, vol. 6, no. 10, p. 221, 2014.

[37] R. Owor. (2014) Lectures: CSCI 4151 Systems Simulation Class. [Online]. Available: http://www.robertowor.com/csci4151/lecture5.htm

[38] N. Ince and A. Bragg, Recent Advances in Modeling and Simulation Tools for Communication Networks and Services. Springer US, 2007. [Online]. Available: https://books.google.co.uk/books?id=pJSqZZmc9_oC

[39] Riverbed Technology. (2017) Verifying statistical validity of discrete event simulations. [Online]. Available: http://enterprise14.opnet.com/4dcgi/CL_SessionDetail?ViewCL_SessionID=3172, White Paper: Verifying_Statistical_Validity_DES.pdf

[40] OPNET Modeling Concepts, 2007. [Online]. Available: http://suraj.lums.edu.pk/~te/simandmod/Opnet/07%20Simulation%20Design.pdf


[41] ROHDE&SCHWARZ. (2017) 1MA272: Testing LTE-A Releases 11 and 12. [Online]. Available: https://www.rohde-schwarz.com/us/applications/testing-lte-a-releases-11-and-12-application-note56280-307009.html

[42] 3GPP TS 36.213 V14.2.0 (2017-03). [Online]. Available: https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=2427

[43] A. Sethi and V. Hnatyshin, The Practical OPNET User Guide for Computer Network Simulation. Taylor & Francis, 2012. [Online]. Available: https://books.google.co.uk/books?id=3E3wqbSoHQQC

[44] AT&T, Orange, Telefonica, TeliaSonera, Verizon, Vodafone, Alcatel-Lucent, Ericsson, Nokia Siemens Networks, Nokia, Samsung and Sony Ericsson, "One Voice; Voice over IMS profile," v1.0.0, Nov. 2009.

[45] "Technical Specification Group Services and System Aspects; Policy and charging control architecture (Release 10)," 3GPP TS 23.203, vol. 10.6.0, March 2012.

[46] M. Tabany and C. Guy, "An End-to-End QoS performance evaluation of VoLTE in 4G E-UTRAN-based wireless networks," in the 10th International Conference on Wireless and Mobile Communications, 2014.

[47] Cisco, "Cisco Networking Academy Program: IP Telephony v1.0," 2005. [Online]. Available: http://www.hh.se/download/18.70cf2e49129168da015800094781/1341267715838/7_6_Jitter.pdf

[48] S. A. Ahson and M. Ilyas, VoIP Handbook: Applications, Technologies, Reliability, and Security. CRC Press, 2008.

[49] ITU. (2015-06-29) G.107: The E-model: a computational model for use in transmission planning. [Online]. Available: https://www.itu.int/rec/T-REC-G.107-201506-I/en

[50] A. Clark, G. Hunt, and M. Kastner, "RTCP XR Report Block for QoE Metrics Reporting," 2011.

[51] A. D. Clark et al., "Modeling the effects of burst packet loss and recency on subjective voice quality," 2001.

[52] B. Belmudez, Audiovisual Quality Assessment and Prediction for Videotelephony, ser. T-Labs Series in Telecommunication Services. Springer International Publishing, 2014. [Online]. Available: https://books.google.co.uk/books?id=ULTzBQAAQBAJ

[53] GSMA, "IMS Profile for Voice and SMS," IR.92.

[54] China Mobile, "VoLTE White Paper," 2013.

[55] Spirent, "VoLTE Deployment and the Radio Access Network," The LTE User Equipment Perspective, White Paper, 2012.