PCI Conflict and RSI Collision Detection in LTE Networks Using Supervised Learning Techniques
Rodrigo Miguel Martins Diz Miranda Veríssimo
Thesis to obtain the Master of Science Degree in:
Electrical and Computer Engineering
Supervisor(s): Doctor António José Castelo Branco Rodrigues
Doctor Maria Paula dos Santos Queluz Rodrigues
Doctor Pedro Manuel de Almeida Carvalho Vieira
Examination Committee
Chairperson: Doctor José Eduardo Charters Ribeiro da Cunha Sanguino
Supervisor: Doctor Pedro Manuel de Almeida Carvalho Vieira
Member of the Committee: Doctor Pedro Joaquim Amaro Sebastião
November 2017
Acknowledgments
First of all, I would like to thank my supervisor, Professor António Rodrigues, and my co-supervisors, Pro-
fessor Pedro Vieira and Professor Maria Paula Queluz, for all the support and insights given throughout
the Thesis. I would also like to thank CELFINET for the unique opportunity to work in a great environment while doing this project, especially Eng. João Ferraz, who helped me understand the discussed
network conflicts and the database structure. Additionally, I would like to express my gratitude to Eng.
Luzia Carias for helping me in the data gathering process, and also to Eng. Marco Sousa for discussing
ideas related to Data Science and Machine Learning.
I would like to thank the instructors from the Lisbon Data Science Starters Academy for their discus-
sions and guidance related to this Thesis and Data Science in general, namely Eng. Pedro Fonseca,
Eng. Sam Hopkins, Eng. Hugo Lopes and João Ascensão.
To all my friends and colleagues who helped me through these last 5 years in Técnico, by studying
and collaborating in course projects, or by just being great people to be with. Namely, André Rabaça,
Bernardo Gomes, Diogo Arreda, Diogo Marques, Eric Herji, Filipe Fernandes, Francisco Franco, Fran-
cisco Lopes, Gonçalo Vilela, João Escusa, João Ramos, Jorge Atabão, José Dias, Luís Fonseca, Miguel
Santos, Nuno Mendes, Paul Schydlo, Rúben Borralho, Rúben Tadeia, Rodrigo Zenha and Tomás Alves.
Abstract
Nowadays, mobile networks are changing rapidly, which makes it difficult to maintain good and clean
Physical Cell Identity (PCI) and Root Sequence Index (RSI) plans. These plans are essential for the Quality
of Service (QoS) and mobility of Long Term Evolution (LTE) mobile networks, since bad PCI and RSI
plans can introduce wireless network problems such as failed handovers, service drops, and failed service establishments and re-establishments. Consequently, it is theoretically possible to identify PCI and RSI
conflicting cells through the analysis of the Key Performance Indicators (KPI) relevant to both problems. To
do so, each cell must be labeled in accordance with its configured cell relations. Machine Learning (ML)
classification can then be applied under these conditions.
This thesis presents ML approaches to classify time series data from mobile network KPIs, to
detect the KPIs most relevant to PCI and RSI conflicts, and to construct ML models that classify PCI and RSI
conflicting cells with a minimum False Positive (FP) rate and near real time performance, along with
their test results. To achieve these goals, three hypotheses were tested in order to obtain the best
performing ML models. Furthermore, bias was reduced by testing five different classification algorithms,
namely Adaptive Boosting (AB), Gradient Boost (GB), Extremely Randomized Trees (ERT), Random
Forest (RF) and Support Vector Machines (SVM). The obtained models were evaluated in accordance
with their average Precision and peak Precision metrics. The data used was obtained from a real
LTE network.
The best performing models were obtained by using each KPI measurement as an individual feature. The highest average Precision obtained for PCI confusion detection was 31% and 26% for the 800
MHz and 1800 MHz frequency bands, respectively. No conclusions were drawn concerning PCI collision
detection, due to the very low number of PCI collisions (only six) in the dataset. The highest average Precision obtained for RSI collision detection was 61% and 60% for the 800 MHz and 1800 MHz frequency
bands, respectively.
Keywords: Wireless Communications, LTE, Machine Learning, Classification, PCI Conflict, RSI Collision.
Resumo
Atualmente, as redes móveis estão a ser modificadas rapidamente, o que dificulta a manutenção de
bons planos de Physical Cell Identity (PCI) e de Root Sequence Index (RSI). Estes dois parâmetros são
essenciais para uma boa Qualidade de Serviço (QoS) e mobilidade de redes móveis Long Term Evolu-
tion (LTE), pois maus planos de PCI e de RSI poderão levar a problemas de redes móveis, tais como
falhas de handovers, de estabelecimento e de restabelecimento de serviços, e quedas de serviços.
Como tal, é possível, em teoria, identificar conflitos de PCI e colisões de RSI através da análise de
Key Performance Indicators (KPI) relevantes a cada problema. Para tal, cada célula LTE necessita de
ser identificada como conflituosa ou não conflituosa de acordo com as relações de vizinhança. Nestas
condições, é possı́vel aplicar algoritmos de classificação de Aprendizagem Automática (ML).
Esta Tese pretende apresentar abordagens de ML para classificação de séries temporais prove-
nientes de KPIs de redes móveis, obter os KPIs mais relevantes para a deteção de conflitos de PCI
e de RSI, construir modelos de ML com um número mínimo de Falsos Positivos (FP) e desempenho
em quase tempo real. Para alcançar estes objetivos, foram testadas três hipóteses de modo a obter
os modelos de ML com melhor desempenho. Foram testados cinco algoritmos de classificação distin-
tos, nomeadamente Adaptive Boosting (AB), Gradient Boost (GB), Extremely Randomized Trees (ERT),
Random Forest (RF) e Support Vector Machines (SVM). Os modelos obtidos foram avaliados de acordo
com as Precisões médias e picos de Precisão. Por último, os dados foram obtidos de uma rede LTE
real.
Os melhores modelos foram obtidos ao utilizar cada medição de KPI como uma variável individual.
A maior Precisão média obtida para confusões de PCI foi de 31% e de 26% para as bandas de 800 MHz
e de 1800 MHz, respetivamente. Devido ao número bastante baixo de seis colisões de PCI presentes
nos dados obtidos, não foi possı́vel retirar nenhuma conclusão relativamente à sua deteção. A maior
Precisão média obtida para colisões de RSI foi de 61% e de 60% para as bandas de 800 MHz e de
1800 MHz, respetivamente.
Palavras Chave: Comunicações Móveis, LTE, Aprendizagem Automática, Classificação, Conflito de PCI, Colisão de RSI.
Contents
Acknowledgments iii
Abstract v
Resumo vii
List of Figures xiv
List of Tables xv
List of Symbols xviii
Acronyms xxiii
1 Introduction 1
1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.3 Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.4 Publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
2 LTE Background 3
2.1 Introduction to LTE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.2 LTE Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.2.1 Core Network Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2.2 Radio Access Network Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.3 Multiple Access Techniques Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.3.1 OFDMA Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3.2 SC-FDMA Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.3.3 MIMO Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.4 Physical Layer Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.4.1 Transport Channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.4.2 Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.4.3 Downlink User Data Transmission . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.4.4 Uplink User Data Transmission . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.5 Mobility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.5.1 Idle Mode Mobility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.5.2 Intra-LTE Handovers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.5.3 Inter-system Handovers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.6 Performance Data Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.6.1 Performance Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.6.2 Key Performance Indicators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.6.3 Configuration Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3 Machine Learning Background 27
3.1 Machine Learning Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.2 Machine Learning Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.3 Generalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.4 Underfitting and Overfitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.5 Dimensionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.6 Feature Engineering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.7 More Data and Cleverer Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.8 Classification in Multivariate Time Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.9 Proposed Classification Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.9.1 Adaptive Boosting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.9.2 Gradient Boost . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.9.3 Extremely Randomized Trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.9.4 Random Forest . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.9.5 Support Vector Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.10 Classification Model Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4 Physical Cell Identity Conflict Detection 47
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.2 Key Performance Indicator (KPI) Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.3 Network Vendor Feature Based Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.4 Global Cell Neighbor Relations Based Detection . . . . . . . . . . . . . . . . . . . . . . . 52
4.4.1 Data Cleaning Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.4.2 Classification Based on Peak Traffic Data . . . . . . . . . . . . . . . . . . . . . . . 56
4.4.3 Classification Based on Feature Extraction . . . . . . . . . . . . . . . . . . . . . . 61
4.4.4 Classification Based on Raw Cell Data . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.5 Preliminary Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
5 Root Sequence Index Collision Detection 71
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
5.2 Key Performance Indicator Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
5.3 Global Cell Neighbor Relations Based Detection . . . . . . . . . . . . . . . . . . . . . . . 74
5.3.1 Data Cleaning Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.3.2 Peak Traffic Data Based Classification . . . . . . . . . . . . . . . . . . . . . . . . . 77
5.3.3 Feature Extraction Based Classification . . . . . . . . . . . . . . . . . . . . . . . . 81
5.3.4 Raw Cell Data Based Classification . . . . . . . . . . . . . . . . . . . . . . . . . . 83
5.4 Preliminary Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
6 Conclusions 87
6.1 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
6.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
A PCI and RSI Conflict Detection 91
Bibliography 97
List of Figures
2.1 The EPS network elements (adapted from [6]). . . . . . . . . . . . . . . . . . . . . . . . . 4
2.2 Overall E-UTRAN architecture (adapted from [6]). . . . . . . . . . . . . . . . . . . . . . . . 6
2.3 Frequency-domain view of the LTE multiple-access technologies (adapted from [6]). . . . 7
2.4 MIMO principle with two-by-two antenna configuration (adapted from [4]). . . . . . . . . . 8
2.5 Preserving orthogonality between sub-carriers (adapted from [5]). . . . . . . . . . . . . . 8
2.6 OFDMA transmitter and receiver (adapted from [4]). . . . . . . . . . . . . . . . . . . . . . 10
2.7 SC-FDMA transmitter and receiver with frequency domain signal generation (adapted
from [4]). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.8 OFDMA reference symbols to support two eNB transmit antennas (adapted from [4]). . . 12
2.9 LTE modulation constellations (adapted from [4]). . . . . . . . . . . . . . . . . . . . . . . . 14
2.10 Downlink resource allocation at eNB (adapted from [4]). . . . . . . . . . . . . . . . . . . . 14
2.11 Uplink resource allocation controlled by eNB scheduler (adapted from [4]). . . . . . . . . . 17
2.12 Data rate between TTIs in the uplink direction (adapted from [4]). . . . . . . . . . . . . . . 17
2.13 Intra-frequency handover procedure (adapted from [4]). . . . . . . . . . . . . . . . . . . . 20
2.14 Automatic intra-frequency neighbor identification (adapted from [4]). . . . . . . . . . . . . 21
2.15 Overview of the inter-RAT handover from E-UTRAN to UTRAN/GERAN (adapted from [4]). 22
3.1 Procedure of three-fold cross-validation (adapted from [32]). . . . . . . . . . . . . . . . . . 30
3.2 Bias and variance in dart-throwing (adapted from [18]). . . . . . . . . . . . . . . . . . . . . 31
3.3 Bias and variance contributing to total error. . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.4 A learning curve showing the model accuracy on test examples as a function of the number
of training examples. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.5 Example of a Decision Tree to decide whether a football match should be played based
on the weather (adapted from [45]). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.6 Left: The training and test percent error rates using boosting on an Optical Character
Recognition dataset that do not show any signs of overfitting [25]. Right: The training
and test percent error rates on a heart-disease dataset that after five iterations reveal
overfitting [25]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.7 A general tree ensemble algorithm classification procedure. . . . . . . . . . . . . . . . . . 39
3.8 Data mapping from the input space (left) to a high-dimensional feature space (right) to
obtain a linear separation (adapted from [21]). . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.9 The hyperplane constructed by SVMs that maximizes the margin (adapted from [21]). . . 42
4.1 PCI Confusion (left) and PCI Collision (right). . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.2 Time series analysis of KPI values regarding 4200 LTE cells over a single day. . . . . . . 50
4.3 Boxplots of total null value count for each cell per day for three KPIs. . . . . . . . . . . . . 54
4.4 Absolute Pearson correlation heatmap of peak traffic KPI values and the PCI conflict
detection label. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.5 Smoothed Precision-Recall curves for peak traffic PCI confusion detection. . . . . . . . . 59
4.6 Learning curves for peak traffic PCI confusion detection. . . . . . . . . . . . . . . . . . . . 60
4.7 The CPVE for PCI confusion detection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.8 Smoothed Precision-Recall curves for statistical data based PCI confusion detection. . . 63
4.9 Learning curves for statistical data based PCI confusion detection. . . . . . . . . . . . . . 64
4.10 The CPVE for PCI collision detection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
4.11 The CPVE for PCI confusion detection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.12 Smoothed Precision-Recall curves for raw cell data based PCI confusion detection. . . . 67
4.13 Learning curves for raw cell data PCI confusion detection. . . . . . . . . . . . . . . . . . . 68
4.14 Precision-Recall curves for raw cell data PCI collision detection. . . . . . . . . . . . . . . 68
5.1 Time series analysis of KPI values regarding 23500 LTE cells over a single day. . . . . . . 74
5.2 Boxplots of total null value count for each cell per day for two KPIs. . . . . . . . . . . . . . 76
5.3 Absolute Pearson correlation heatmap of peak traffic KPI values and the RSI collision
detection label. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5.4 Smoothed Precision-Recall curves for peak traffic RSI collision detection. . . . . . . . . . 79
5.5 Learning curves for peak traffic RSI collision detection. . . . . . . . . . . . . . . . . . . . . 80
5.6 The CPVE for RSI collision detection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
5.7 Smoothed Precision-Recall curves for statistical data based RSI collision detection. . . . 82
5.8 Learning curves for statistical data based RSI collision detection. . . . . . . . . . . . . . . 83
5.9 The CPVE for RSI collision detection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
5.10 Smoothed Precision-Recall curves for raw cell data RSI collision detection. . . . . . . . . 85
5.11 Learning curves for raw cell data RSI collision detection. . . . . . . . . . . . . . . . . . . . 86
A.1 PCI and RSI Conflict Detection Flowchart. . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
List of Tables
2.1 Downlink peak data rates [5]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.2 Uplink peak data rates [4]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.3 Differences between both mobility modes. . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.4 Description of the KPI categories and KPI examples. . . . . . . . . . . . . . . . . . . . . . 24
2.5 Netherlands P3 KPI analysis done in 2016 [16]. . . . . . . . . . . . . . . . . . . . . . . . . 24
3.1 The three components of learning algorithms (adapted from [18]). . . . . . . . . . . . . . 29
3.2 Confusion Matrix (adapted from [31]). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.1 Chosen Accessibility and Integrity KPIs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.2 Chosen Mobility, Quality and Retainability KPIs. . . . . . . . . . . . . . . . . . . . . . . . . 49
4.3 The obtained cumulative Confusion Matrix. . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.4 The obtained Model Evaluation metrics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.5 Resulting dataset composition subsequent to data cleaning. . . . . . . . . . . . . . . . . . 55
4.6 Average importance given to each KPI by each Decision Tree based classifier. . . . . . . 57
4.7 Peak traffic PCI Confusion classification results. . . . . . . . . . . . . . . . . . . . . . . . 58
4.8 PCI Confusion classification training and testing times in seconds. . . . . . . . . . . . . . 60
4.9 Statistical data based PCI confusion classification results. . . . . . . . . . . . . . . . . . . 62
4.10 Statistical data based PCI confusion classification training and testing times in seconds. . 64
4.11 Raw cell data PCI confusion classification results. . . . . . . . . . . . . . . . . . . . . . . 66
4.12 Raw cell data PCI confusion classification training and testing times in seconds. . . . . . 67
5.1 Chosen Accessibility and Mobility KPIs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
5.2 Chosen Quality and Retainability KPIs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
5.3 Average importance given to each KPI by each Decision Tree based classifier. . . . . . . 78
5.4 Peak traffic RSI collision classification results. . . . . . . . . . . . . . . . . . . . . . . . . . 79
5.5 RSI collision classification training and testing times in seconds. . . . . . . . . . . . . . . 80
5.6 Statistical data based RSI collision classification results. . . . . . . . . . . . . . . . . . . . 81
5.7 RSI collision classification training and testing times in seconds. . . . . . . . . . . . . . . 82
5.8 Raw cell data RSI collision classification results. . . . . . . . . . . . . . . . . . . . . . . . 84
5.9 RSI collision classification training and testing times in seconds. . . . . . . . . . . . . . . 85
List of Symbols
Srxlevel Rx level value of a cell.
Qrxlevelmeas Reference Signal Received Power from a cell.
Qrxlevmin Minimum required level for cell camping.
Qrxlevelminoffset Offset used when searching for a Public Land Mobile Network of preferred network operators.
SServingCell Rx value of the serving cell.
Sintrasearch Rx level threshold for the User Equipment to start making intra-frequency measurements.
Snonintrasearch Rx level threshold for the User Equipment to start making inter-system measurements.
Qmeas Reference Signal Received Power measurement for cell re-selection.
Qhyst Power domain hysteresis used to avoid the ping-pong phenomenon between cells.
Qoffset Offset control parameter to deal with different frequencies and cell characteristics.
Treselection Time limit to perform cell re-selection.
Threshhigh Higher threshold for a User Equipment to camp on a higher priority layer.
Threshlow Lower threshold for a User Equipment to camp on a low priority layer.
x Input vector for a Machine Learning model.
y Output vector that a Machine Learning model aims to predict.
ŷ Output vector that a Machine Learning model predicts.
σ2ab Covariance matrix of variable vectors a and b.
λ Eigenvalue of a Principal Component.
Wt Weight array of t iterations.
θt Parameters of a classification algorithm of t iterations.
αt Weight of a hypothesis of t iterations.
Zt Normalization factor of t iterations.
H Machine Learning model.
f Functional dependence between input and output vectors.
f̂ Estimated functional dependence.
ψ Loss function.
gt Negative gradient of a loss function of t iterations.
Ey Expected prediction loss.
ρt Gradient step size of t iterations.
K Number of randomly selected features.
nmin Minimum sample size for splitting a Decision Tree node.
M Total number of Decision Trees to grow in an ensemble.
S Data subset.
fSmax Maximal value of a variable vector in a data subset S.
fSmin Minimal value of a variable vector in a data subset S.
fc Random cut-point of a variable vector.
Optimization problem for Support Vector Machines.
C Positive regularization constant for Support Vector Machines.
ξ Slack variable that states whether a data sample is on the correct side of a hyperplane.
α Lagrange multiplier.
#SV Number of Support Vectors.
K(·, ·) Support Vector Machines kernel function.
σ Free parameter.
γ Positive regularization constant for Support Vector Machines.
β Weight constant for defining importance for either Precision or Recall metrics.
Q1 First quartile.
Q3 Third quartile.
Nrows Number of sequences needed to generate the 64 Random Access Channel preambles.
Acronyms
1NN One Nearest Neighbor
3GPP Third Generation Partnership Project
4G Fourth Generation
AB Adaptive Boosting
AuC Authentication Centre
BCH Broadcast Channel
BPSK Binary Phase Shift Keying
CM Configuration Management
CNN Convolutional Neural Network
CQI Channel Quality Indicator
CPVE Cumulative Proportion of Variance Explained
CRC Cyclic Redundancy Check
CS Circuit-Switched
DFT Discrete Fourier Transform
DL-SCH Downlink Shared Channel
EDGE Enhanced Data rates for Global Evolution
eNB Evolved Node B
EPC Evolved Packet Core
EPS Evolved Packet System
E-SMLC Evolved Serving Mobile Location Centre
ERT Extremely Randomized Tree
E-UTRA Evolved UMTS Terrestrial Radio Access
E-UTRAN Evolved UMTS Terrestrial Radio Access Network
FDMA Frequency Division Multiple Access
FFT Fast Fourier Transform
FN False Negative
FP False Positive
FTP File Transfer Protocol
GB Gradient Boost
GERAN GSM EDGE Radio Access Network
GMLC Gateway Mobile Location Centre
GPRS General Packet Radio Service
GSM Global System for Mobile Communications
GTP GPRS Tunneling Protocol
GW Gateway
HARQ Hybrid Automatic Repeat reQuest
HSPA High Speed Packet Access
HSDPA High Speed Downlink Packet Access
HSS Home Subscriber Server
HSUPA High Speed Uplink Packet Access
ID Identity
IDFT Inverse Discrete Fourier Transform
IEEE Institute of Electrical and Electronics Engineers
IFFT Inverse Fast Fourier Transform
IP Internet Protocol
IQR Interquartile Range
ITU International Telecommunication Union
kNN k-Nearest Neighbor
KPI Key Performance Indicators
LCS LoCation Services
LSTM Long Short Term Memory
LTE Long Term Evolution
MAC Medium Access Control
MCH Multicast Channel
ME Mobile Equipment
MIB Master Information Block
MIMO Multiple-Input Multiple-Output
ML Machine Learning
MME Mobility Management Entity
MNO Mobile Network Operators
MT Mobile Termination
NaN Not a Number
NE Network Element
NR Network Resource
OAM Operations, Administration and Management
OFDM Orthogonal Frequency Division Multiplexing
OFDMA Orthogonal Frequency Division Multiple Access
OS Operations System
PAPR Peak-to-Average Power Ratio
PAR Peak-to-Average Ratio
PBCH Physical Broadcast Channel
PC Principal Component
PCA Principal Component Analysis
PCCC Parallel Concatenated Convolution Coding
PCH Paging Channel
PCI Physical Cell Identity
PCRF Policy Control and Charging Rules Function
PDCCH Physical Downlink Control Channel
PDN Packet Data Network
PDSCH Physical Downlink Shared Channel
PLMN Public Land Mobile Network
PM Performance Management
PMCH Physical Multicast Channel
PRACH Physical Random Access Channel
PRB Physical Resource Block
P-GW Packet Data Network Gateway
PR Precision-Recall
PS Packet-Switched
PS HO Packet-Switched Handover
PUCCH Physical Uplink Control Channel
PUSCH Physical Uplink Shared Channel
QAM Quadrature Amplitude Modulation
QoS Quality of Service
QPSK Quadrature Phase Shift Keying
RACH Random Access Channel
RAT Radio Access Technology
RBF Radial Basis Function
RBS Radio Base Station
RF Random Forest
RLC Radio Link Control
ROC Receiver Operating Characteristic
RRC Radio Resource Control
RSI Root Sequence Index
RSRP Reference Signal Received Power
RSRQ Reference Signal Received Quality
RSSI Received Signal Strength Indicator
SAE System Architecture Evolution
SAE GW SAE Gateway
SC-FDMA Single-Carrier Frequency Division Multiple Access
SDU Service Data Unit
S-GW Serving Gateway
SIB System Information Block
SIM Subscriber Identity Module
SNMP Simple Network Management Protocol
SNR Signal-to-Noise Ratio
SON Self-Organizing Network
SQL Structured Query Language
SVM Support Vector Machines
TDMA Time Division Multiple Access
TE Terminal Equipment
TMN Telecommunication Management Network
TN True Negative
TP True Positive
TTI Transmission Time Interval
UE User Equipment
UICC Universal Integrated Circuit Card
UL-SCH Uplink Shared Channel
UMTS Universal Mobile Telecommunications System
URSI International Union of Radio Science
USIM Universal Subscriber Identity Module
UTRAN UMTS Terrestrial Radio Access Network
V-MIMO Virtual Multiple-Input Multiple-Output
VoIP Voice over IP
WCDMA Wideband Code Division Multiple Access
WCNC Wireless Communications and Networking Conference
Chapter 1
Introduction
This chapter aims to deliver an overview of the presented work. It includes the context and motivation
that led to the development of this work, as well as its objectives and overall structure.
1.1 Motivation
Two of the major concerns of Mobile Network Operators (MNO) are to optimize and to maintain network
performance. However, maintaining performance has proved to be challenging, mainly for large and
complex networks. In the long term, changes made to the networks may increase the number of internal
conflicts and inconsistencies. These include modifications such as changing the antenna tilt or the
cell's transmit power, as well as factors that cannot be controlled by the MNOs, such as user mobility and radio
channel fading.
In order to assess network performance, quantifiable performance metrics, known as Key Performance Indicators (KPI), are typically used. KPIs report network performance figures such as the handover
success rate and the average channel interference of each cell, and are calculated periodically, resulting in time series.
In order to automatically detect the causes of network faults, some work has been done using KPI
measurements with unsupervised techniques, as in [1]. This thesis focuses on applying supervised
techniques for two known Long Term Evolution (LTE) network conflicts, namely Physical Cell Identity
(PCI) conflicts and Root Sequence Index (RSI) collisions.
1.2 Objectives
This thesis aims to create Machine Learning (ML) models that can correctly classify PCI conflicts and
RSI collisions with a minimum False Positive (FP) rate and near real time performance. To achieve
this goal, three hypotheses to obtain the best models were tested:
1. PCI conflicts and/or RSI collisions are better detected by using KPI measurements in the daily
peak traffic instant of each cell;
2. PCI conflicts and/or RSI collisions are better detected by extracting statistical calculations from
each KPI daily time series and using them as features;
3. PCI conflicts and/or RSI collisions are better detected by using each cell’s KPI measurements in
each day as an individual feature.
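As an illustration, the three feature constructions above can be sketched on a toy daily KPI series for a single cell. The KPI names, sampling rate and statistics below are illustrative assumptions, not the exact ones used in this work:

```python
import numpy as np
import pandas as pd

# Toy example: one day of hourly measurements for two hypothetical KPIs of one cell.
rng = np.random.default_rng(0)
day = pd.DataFrame({
    "traffic": rng.uniform(0, 100, 24),          # e.g. carried traffic per hour
    "ho_success_rate": rng.uniform(90, 100, 24),  # e.g. handover success rate (%)
})

# Hypothesis 1: keep only the KPI measurements at the daily peak traffic instant.
peak_features = day.loc[day["traffic"].idxmax()].to_numpy()

# Hypothesis 2: extract statistical summaries from each KPI's daily time series.
stat_features = day.agg(["mean", "std", "min", "max"]).to_numpy().ravel()

# Hypothesis 3: use every measurement of the day as an individual feature.
raw_features = day.to_numpy().ravel()

print(len(peak_features), len(stat_features), len(raw_features))  # → 2 8 48
```

Each construction yields one feature vector per cell per day; the resulting vectors differ only in dimensionality and in how much of the daily dynamics they preserve.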
These three hypotheses were tested by taking into account the average Precisions and the peak
Precisions obtained from testing the models, as well as their training and testing durations. In order to
reduce bias in this study, five different classification algorithms were selected, namely Adaptive Boosting
(AB), Gradient Boost (GB), Extremely Randomized Tree (ERT), Random Forest (RF) and Support Vector
Machines (SVM). The aim of the classifiers was to classify cells as either nonconflicting or conflicting,
depending on the detection use case. The classification algorithm implementations used were obtained
from the Python Scikit-Learn library [2].
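Under this Scikit-Learn framing, the five algorithms can be instantiated and compared by average Precision roughly as follows. This is a minimal sketch: synthetic, imbalanced data stands in for the per-cell KPI feature vectors, and all hyperparameters are library defaults rather than the values tuned in this work:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              ExtraTreesClassifier, RandomForestClassifier)
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import average_precision_score

# Synthetic imbalanced dataset: ~90% nonconflicting (0), ~10% conflicting (1).
X, y = make_classification(n_samples=600, n_features=10, weights=[0.9],
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=42)

classifiers = {
    "AB": AdaBoostClassifier(random_state=42),
    "GB": GradientBoostingClassifier(random_state=42),
    "ERT": ExtraTreesClassifier(random_state=42),
    "RF": RandomForestClassifier(random_state=42),
    "SVM": SVC(probability=True, random_state=42),
}

scores = {}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    proba = clf.predict_proba(X_test)[:, 1]  # score for the "conflicting" class
    scores[name] = average_precision_score(y_test, proba)
print(scores)
```

Average Precision summarizes the Precision-Recall curve into a single number, which makes it a natural ranking metric when, as here, the conflicting class is rare and a low FP rate is the priority.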
1.3 Structure
This work is divided into four main chapters. Chapter 2 presents a technical background of LTE and
Chapter 3 addresses ML concepts as well as more specific ones, such as how time series can be
classified to reach the thesis’ objectives and a technical overview of the proposed classification algo-
rithms. These two aforementioned chapters deliver the necessary background to understand the work
in Chapters 4 and 5.
Chapter 4 introduces the LTE PCI network parameter, explains how PCI conflicts can occur, performs the
hypothesis testing and presents the respective hypotheses' results. Additionally, it includes sections focused on
data cleaning, KPI selection and preliminary conclusions. Chapter 5 has the same structure as Chapter
4, but is focused on RSI collisions.
Finally, in Chapter 6, conclusions are drawn and future work is suggested.
1.4 Publications
Two scientific papers were written in the context of this Thesis, namely:
• "PCI and RSI Conflict Detection in a Real LTE Network Using Supervised Techniques", written by
R. Veríssimo, P. Vieira, M. P. Queluz and A. Rodrigues. This paper was submitted to the 2018
Institute of Electrical and Electronics Engineers (IEEE) Wireless Communications and Networking
Conference (WCNC), Barcelona, Spain, 15th-18th April 2018.
• "Deteção de Conflitos de PCI e de RSI Numa Rede Real LTE Utilizando Aprendizagem Automática"
(in Portuguese: "PCI and RSI Conflict Detection in a Real LTE Network Using Machine Learning"),
written by R. Veríssimo, P. Vieira, M. P. Queluz and A. Rodrigues. This paper was submitted to the
11th International Union of Radio Science (URSI) Congress, Lisbon, Portugal, 24th November 2017.
Chapter 2
LTE Background
This chapter provides an overview of the LTE standard [3], aiming for a better understanding of the
work that will be developed under the Thesis scope. Section 2.1 presents a brief introduction to LTE and
Section 2.2 delivers an architectural overview of this system. Section 2.3 presents a succinct overview of
the multiple access techniques that are used in LTE. The physical layer design is introduced in Section
2.4. Section 2.5 addresses how mobility is handled in LTE. Finally, Section 2.6 describes how data
originating from telecommunication networks is typically collected and evaluated.
The content of this chapter is mainly based on the following references: [4, 5] in Section 2.1; [6, 7] in
Section 2.2; [6, 4, 5] in Section 2.3; [4, 5] in Section 2.4; [4] in Section 2.5; [8, 9] in Section 2.6.
2.1 Introduction to LTE
LTE is a Fourth Generation (4G) wireless communication standard developed by the Third Generation
Partnership Project (3GPP); it resulted from the development of a packet-only wideband radio system
with flat architecture, and was specified for the first time in the 3GPP Release 8 document series.
The downlink in LTE uses Orthogonal Frequency Division Multiple Access (OFDMA) as its multiple
access scheme and the uplink uses Single-Carrier Frequency Division Multiple Access (SC-FDMA).
Both of these solutions result in orthogonality between the users, diminishing the interference and en-
hancing the network capacity. The resource allocation in both uplink and downlink is done in the frequency domain, with a resolution of 180 kHz, consisting of twelve sub-carriers of 15 kHz each. The
high capacity of LTE is due to its packet scheduling being carried out in the frequency domain. The
main difference between the resource allocation on the uplink and on the downlink is that the former
is continuous, in order to enable single carrier transmission, whereas the latter can freely use resource
blocks from different parts of the spectrum. Resource blocks are frequency and time resources that
occupy 12 subcarriers of 15 kHz each and one time slot of 0.5 ms. By adopting the uplink single carrier
solution, LTE enables efficient terminal power amplifier design, which is essential for the terminal battery
life. Depending on the available spectrum, LTE allows spectrum flexibility that can range from 1.4 MHz
up to 20 MHz. In ideal conditions, the 20 MHz bandwidth can provide up to 172.8 Mbps downlink user
data rate with 2x2 Multiple-Input Multiple-Output (MIMO) and 340 Mbps with 4x4 MIMO; the uplink peak
data rate is 86.4 Mbps.
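The 180 kHz resolution follows directly from the numerology above (12 sub-carriers × 15 kHz). A short sketch checking this, and the number of occupied resource blocks per channel bandwidth (the resource-block counts are the standard LTE figures from 3GPP TS 36.101):

```python
SUBCARRIER_HZ = 15_000
SUBCARRIERS_PER_RB = 12
RB_HZ = SUBCARRIER_HZ * SUBCARRIERS_PER_RB      # 180 kHz per resource block

# Channel bandwidth [Hz] -> occupied resource blocks (3GPP TS 36.101 figures).
RBS_PER_CHANNEL = {1.4e6: 6, 3e6: 15, 5e6: 25, 10e6: 50, 15e6: 75, 20e6: 100}

# Occupied sub-carrier bandwidth per channel; the remainder is guard band.
occupied_hz = {bw: n * RB_HZ for bw, n in RBS_PER_CHANNEL.items()}
```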
2.2 LTE Architecture
In contrast to the Circuit-Switched (CS) model of previous cellular systems, LTE is designed to only
support Packet-Switched (PS) services, aiming to provide seamless Internet Protocol (IP) connectivity
between the User Equipment (UE) and the Packet Data Network (PDN), without disrupting the end users’
applications during mobility. LTE corresponds to the evolution of radio access through the Evolved UMTS
Terrestrial Radio Access Network (E-UTRAN) alongside an evolution of the non-radio aspects, named
as System Architecture Evolution (SAE), which includes the Evolved Packet Core (EPC) network. The
combination of LTE and SAE forms the Evolved Packet System (EPS), which provides the user with IP
connectivity to a PDN for accessing the Internet, as well as running different services simultaneously,
such as File Transfer Protocol (FTP) and Voice over IP (VoIP).
The features offered by LTE are supported through several EPS network elements with different roles.
Figure 2.1 shows the global network architecture that encompasses both the network elements and the
standardized interfaces. The network comprises the core network (i.e. EPC) and the access network
(i.e. E-UTRAN). The access network consists of one node, the Evolved Node B (eNB), which connects
to the UEs. The network elements are inter-connected through interfaces that are standardized in order
to allow multivendor interoperability.
Figure 2.1: The EPS network elements (adapted from [6]).
The UE is the interface through which the subscriber is able to communicate with the E-UTRAN; it is
composed of the Mobile Equipment (ME) and the Universal Integrated Circuit Card (UICC). The ME
is essentially the radio equipment that is used to communicate; it can also be divided into both Mobile
Termination (MT) — which conducts all the communication functions — and Terminal Equipment (TE)
— that terminates the streams of data. The UICC is a smart card, informally known as the Subscriber
Identity Module (SIM) card; it runs the Universal Subscriber Identity Module (USIM), which is an appli-
cation that stores user-specific data (e.g. phone number and home network identity). Additionally, it also
employs security procedures through the security keys that are stored in the UICC.
2.2.1 Core Network Architecture
The EPC corresponds to the core network; its role is to control the UE and to establish the bearers
– the paths that user traffic follows when traversing the LTE transport network. The main logical
nodes of the EPC are the Mobility Management Entity (MME), the Packet Data Network Gateway (P-GW), the Serving
Gateway (S-GW) and the Evolved Serving Mobile Location Centre (E-SMLC). Furthermore, there are
other logical nodes that also belong to the EPC such as the Home Subscriber Server (HSS), the Gateway
Mobile Location Centre (GMLC) and the Policy Control and Charging Rules Function (PCRF). These
logical nodes are described in the following points:
• MME is the main control node in the EPC. It manages user mobility in the corresponding service
area through tracking, and also manages the user subscription profile and service connectivity by
cooperating with the HSS. Moreover, it is solely responsible for the security and authentication of
users in the network.
• P-GW is the node that interconnects the EPS with the PDNs. It acts as an IP attachment point
and allocates the IP addresses for the UE, although this allocation can also be performed by a PDN,
in which case the P-GW tunnels traffic between the UE and the PDN. Moreover, it handles the traffic
gating and filtering functions required for the services being used.
• S-GW is a network element that not only links user plane traffic between the eNB and the P-GW,
but also retains information about the bearers when the UE is in idle state.
• E-SMLC is responsible for managing the scheduling and coordinating the resources necessary
to locate the UE. Furthermore, it computes the final location estimate, together with the UE speed
and the corresponding accuracy.
• HSS is a central database that holds information regarding all the network operator’s subscribers
such as their Quality of Service (QoS) profile and any access restrictions for roaming. It not only
holds information about the PDNs to which the user is able to connect, but also stores dynamic
information (e.g. the identity of the MME to which the user is currently attached or registered).
Additionally, the HSS may integrate the Authentication Centre (AuC), which is responsible for
generating the vectors used for both authentication and security keys.
• GMLC incorporates the fundamental functionalities to support LoCation Services (LCS). After
being authorized, it sends positioning requests to the MME and collects the final location estimates.
• PCRF is responsible for managing the users’ QoS and data charges. The PCRF is connected to
the P-GW and sends information to it for enforcement.
2.2.2 Radio Access Network Architecture
The E-UTRAN represents the radio component of the architecture. It is responsible for connecting the
UEs to the EPC, thereby connecting UEs to each other and to PDNs (e.g. the Internet).
Composed solely of eNBs, the E-UTRAN is a mesh of interconnected eNBs through X2 interfaces
(that can be either physical or logical links). These nodes are intelligent radio base stations that cover
one or more cells and that are also capable of handling all the radio related protocols (e.g. handover).
Unlike in Universal Mobile Telecommunications System (UMTS), there is no centralized controller in
E-UTRAN for normal user traffic and hence its architecture is flat, which can be observed in Figure 2.2.
Figure 2.2: Overall E-UTRAN architecture (adapted from [6]).
The eNB has two main responsibilities: firstly, it sends radio transmissions to all its mobile devices
on the downlink and also receives transmissions from them on the uplink; secondly, it controls the low-
level operation of all its mobile devices through signalling messages (e.g. handover commands) that are
related to those same radio transmissions. The eNBs are normally connected with each other through an
interface called X2 and also to the EPC through the S1 interface. Additionally, the eNBs are connected
to the MME by means of the S1-MME interface and also to the S-GW through the S1-U interface.
The key functions of E-UTRAN can be summarized as:
• managing the radio link’s resources and controlling the radio bearers;
• compressing the IP headers;
• encrypting all data sent over the radio interface;
• routing user traffic towards the S-GW and delivering user traffic from the S-GW to the UE;
• providing the required measurements and additional data to the E-SMLC in order to find the UE
position;
• handling handover between connected eNBs through X2 interfaces;
• signalling towards the MME and also the bearer path towards the S-GW.
The eNBs are responsible for all these functions on the network side, where one single eNB can
manage multiple cells. One key differentiation factor from previous generations is that LTE assigns
the radio controller function to the eNB. This strategy reduces latency and improves the efficiency of
the network due to the closer interaction between the radio protocols and the radio access network.
There is no need for a centralized data-combining function in the network, as LTE does not support
soft-handovers. The removal of the centralized network requires that, as the UE moves, the network
transfers all information related to the UE towards another eNB.
The S1 interface has an important feature that allows for a link between the access network and
the core network (i.e. S1-flex). This means that multiple core network nodes can serve a common
geographical area, being connected by a mesh network to the set of eNBs in that area. Thus, an eNB
can be served by multiple MME/S-GWs, as happens for the eNB#2 in Figure 2.2. This allows UEs in
the network to be shared between multiple core network nodes through an eNB, hence eliminating
single points of failure for the core network nodes and allowing for load sharing.
2.3 Multiple Access Techniques Overview
In order to fulfil all the requirements defined for LTE, advances were made to the underlying mobile radio
technology; more specifically, to both the multicarrier and the multiple-antenna technologies.
The first major design choice in LTE was to adopt a multicarrier approach. Regarding the downlink,
the nominated schemes were OFDMA and Multiple Wideband Code Division Multiple Access (WCDMA),
with OFDMA being the selected one. Concerning the uplink, the suggested schemes were SC-FDMA,
OFDMA and Multiple WCDMA, resulting in the selection of SC-FDMA. Both of these selected schemes
presented the frequency domain as a new dimension of flexibility that introduced a potent new way to
improve not only the system’s spectral efficiency, but also to minimize both the fading problems and
inter-symbol interference. These two selected schemes are represented in Figure 2.3.
Figure 2.3: Frequency-domain view of the LTE multiple-access technologies (adapted from [6]).
Before delving into the basics of both OFDMA and SC-FDMA, it is important to present some basic
concepts first:
• for single carrier transmission in LTE, a single carrier is modulated in phase and/or amplitude. The
spectrum waveform is a filtered single carrier spectrum that is centered on the carrier frequency.
• in a digital system, the higher the data rate, the higher the symbol rate and thereupon the larger
the bandwidth required for the same modulation. In order to carry the desired number of bits per
symbol, the transmitter can change the modulation.
• in a Frequency Division Multiple Access (FDMA) system, the system can be accessed simultane-
ously by different users through the use of different carriers and sub-carriers. In such a system,
it is crucial to avoid excessive interference between carriers without adopting long guard bands
between users.
• in the research for even better spectral efficiencies, multiple antenna technologies were considered
as a way to exploit another new dimension — the spatial domain. As such, the first LTE Release
led to the introduction of the MIMO operation that includes spatial multiplexing and also pre-coding
and transmit diversity. The basic principle of MIMO is presented in Figure 2.4 where different
streams of data are fed to the pre-coding operation and forwarded to signal mapping and OFDMA
signal generation.
Figure 2.4: MIMO principle with two-by-two antenna configuration (adapted from [4]).
2.3.1 OFDMA Basics
OFDMA consists of narrow and mutually orthogonal sub-carriers that are separated typically by 15 kHz
from adjacent sub-carriers, regardless of the total transmission bandwidth. Orthogonality is preserved
because, at the sampling instant of any specific sub-carrier, all other sub-carriers have a zero value,
as can be observed in Figure 2.5.
Figure 2.5: Preserving orthogonality between sub-carriers (adapted from [5]).
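This orthogonality can be verified numerically: over one symbol period of 1/15 kHz ≈ 66.7 µs, two complex exponentials spaced by a multiple of 15 kHz have zero inner product. A generic DSP sketch (not LTE-specific code):

```python
import numpy as np

DF = 15_000.0                 # sub-carrier spacing [Hz]
T = 1.0 / DF                  # symbol duration without cyclic prefix [s]
N = 2048                      # samples per symbol
t = np.arange(N) * T / N

def subcarrier(k):
    """Complex exponential of the k-th sub-carrier over one symbol period."""
    return np.exp(2j * np.pi * k * DF * t)

# Normalised inner product over one symbol: 1 for the same sub-carrier,
# (numerically) zero for any two different sub-carriers.
same = np.vdot(subcarrier(3), subcarrier(3)) / N
cross = np.vdot(subcarrier(3), subcarrier(7)) / N
```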
As stated in the beginning of Section 2.3, OFDMA was selected over Multiple WCDMA. The key
characteristics that led to that decision [7, 10, 11] are:
• low-complexity receivers even with severe channel conditions;
• robustness to time-dispersive radio channels;
• immunity to selective fading;
• resilience to narrow-band co-channel interference and both inter-symbol and inter-frame interfer-
ence;
• high spectral efficiency;
• efficient implementation with Fast Fourier Transform (FFT).
Meanwhile, OFDMA also presents some challenges, such as [7, 10, 11]:
• higher sensitivity to carrier frequency offset, caused by leakage of the Discrete Fourier Transform
(DFT), relative to single carrier systems;
• high Peak-to-Average Power Ratio (PAPR) of the transmitted signal, which requires high linearity
in the transmitter, resulting in poor power efficiency;
• sensitivity to Doppler shift, that was solved in LTE by choosing a sub-carrier spacing of 15 kHz and
hence providing a relatively large tolerance;
• sensitivity to frequency synchronization problems.
The OFDMA implementation is based on the use of both DFT and Inverse Discrete Fourier Transform
(IDFT) in order to move between time and frequency domain representation. Furthermore, the practical
implementation uses the FFT, which moves the signal from time to frequency domain representation;
the opposite operation is done through the Inverse Fast Fourier Transform (IFFT).
The transmitter used by an OFDMA system contains an IFFT block that acts on the sub-carriers to
convert the signal to the time domain. The input of this block results from the serial-to-
parallel conversion of the data source. Finally, a cyclic extension is added to the output signal of the IFFT
block, which aims to avoid inter-symbol interference. By contrast, inverse operations are implemented
in the receiver with the addition of an equalisation block between the FFT and the demodulation blocks.
The architecture of the OFDMA transmitter and receiver is presented in Figure 2.6.
The cyclic extension is performed by copying the final part of the symbol to its beginning. This method
is preferable to adding a plain guard interval because it makes the Orthogonal Frequency Division Multiplexing (OFDM)
signal appear periodic. When the symbol is periodic, the impact of the channel corresponds to a multiplication
by a scalar, assuming that the cyclic extension is long enough. Moreover, this periodicity of the signal
allows for a discrete Fourier spectrum, enabling the use of both DFT and IDFT in the receiver and
transmitter respectively.
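The scalar-channel property can be illustrated with a short simulation: when the cyclic prefix is at least as long as the channel impulse response, linear convolution with the channel acts as circular convolution, so the equaliser reduces to one complex division per sub-carrier. An illustrative sketch with an arbitrary three-tap channel:

```python
import numpy as np

rng = np.random.default_rng(0)
N, CP = 64, 8                                  # sub-carriers, cyclic prefix length
h = np.array([1.0, 0.5, 0.25])                 # toy channel, shorter than the CP

X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N)  # one QPSK symbol per sub-carrier
x = np.fft.ifft(X)                             # time-domain OFDM symbol
tx = np.concatenate([x[-CP:], x])              # prepend the cyclic extension

rx = np.convolve(tx, h)[:CP + N]               # channel (noise omitted)
Y = np.fft.fft(rx[CP:])                        # drop CP, back to the frequency domain

H = np.fft.fft(h, N)                           # per-sub-carrier complex gains
X_hat = Y / H                                  # one-tap equaliser per sub-carrier
```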
An important advantage of the use of OFDMA in a base station transmitter is that it can allocate any
of its sub-carriers to users in the frequency domain, allowing the scheduler to benefit from frequency
diversity. Yet, the resulting signalling overhead prevents the allocation of a single sub-carrier, forcing
the use of a Physical Resource Block (PRB) consisting of 12 sub-carriers. As such, the minimum
bandwidth that can be allocated is 180 kHz.

Figure 2.6: OFDMA transmitter and receiver (adapted from [4]).

This allocation in the time-domain
corresponds to 1 ms, also known as Transmission Time Interval (TTI), although each PRB only lasts for
0.5 ms. In LTE, each PRB can be modulated either through Quadrature Phase Shift Keying (QPSK) or
Quadrature Amplitude Modulation (QAM), namely 16-QAM and 64-QAM.
2.3.2 SC-FDMA Basics
Although OFDMA works well on the LTE downlink, it has one drawback: the transmitted signal power
is subjected to large variations. This results in high PAPR, which in turn can cause problems for the
transmitter’s power amplifier. In the downlink, the base station transmitters are large and expensive
devices that can use expensive power amplifiers. The same does not happen in the uplink, where the
mobile transmitter has to be cheap. This makes OFDMA unsuitable for the LTE uplink.
Hence, it was decided to use SC-FDMA for multiple access. Its basic form could be perceived as
equal to the QAM modulation, where each symbol is sent one at a time, similarly to Time Division
Multiple Access (TDMA) systems, such as Global System for Mobile Communications (GSM). The
frequency domain generation of the signal, which can be observed in Figure 2.7, adds the OFDMA
property of good spectral waveform. This eliminates the need for guard bands between different users,
similarly to OFDMA downlink. A cyclic extension is also added periodically to the signal, as happens in
OFDMA with the exception of not being added after each symbol. This is due to the symbol rate being
faster than in OFDMA. The added cyclic extension prevents inter-symbol interference between blocks
of symbols and also simplifies the receiver design. The remaining inter-symbol interference is handled
by running the equalizer in the receiver over a block of symbols, until reaching the cyclic prefix.
While the transmission occupies the whole spectrum allocated to the user in the frequency domain,
the system has a 1 ms resolution allocation. For instance, when the resource allocation is doubled,
so is the data rate, assuming the same level of overhead. Hence, the individual transmission gets
shorter in the time domain but wider in the frequency domain. The allocations need not remain in
the same part of the spectrum over time, but each allocation must be a contiguous set of frequency domain
resources.

Figure 2.7: SC-FDMA transmitter and receiver with frequency domain signal generation (adapted from [4]).

The allowed number of 180 kHz resource blocks – the minimum resource allocation, based on the
15 kHz sub-carrier spacing of the OFDMA downlink – is defined by practical signalling constraints.
The maximum allocated bandwidth can go up to 20 MHz, but tends to be smaller
as it is required to have a guard band towards the neighboring operator.
As the transmission is carried on a single carrier in the time domain, the system retains its good envelope
properties, and the waveform characteristics are highly dependent on the applied modulation method. Thus,
SC-FDMA is able to reach a very low signal Peak-to-Average Ratio (PAR). Moreover, it facilitates efficient
power amplifiers in the devices, saving battery life.
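The envelope advantage can be seen by comparing the PAPR of a plain OFDMA symbol with that of a DFT-spread (SC-FDMA-style) symbol carrying the same QPSK data. This is a simplified sketch, without pulse shaping or standard-compliant sub-carrier mapping:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 512, 120                         # IFFT size, allocated sub-carriers

def papr_db(x):
    """Peak-to-Average Power Ratio of a complex baseband signal, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

qpsk = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], M) / np.sqrt(2)

# OFDMA: QPSK symbols mapped straight onto M contiguous sub-carriers.
ofdma = np.zeros(N, complex)
ofdma[:M] = qpsk
papr_ofdma = papr_db(np.fft.ifft(ofdma))

# SC-FDMA: an M-point DFT spreads the symbols before sub-carrier mapping,
# which keeps the time-domain envelope close to a single carrier signal.
scfdma = np.zeros(N, complex)
scfdma[:M] = np.fft.fft(qpsk)
papr_scfdma = papr_db(np.fft.ifft(scfdma))
```

With full-bandwidth allocation and a same-size DFT/IFFT pair, the two transforms cancel exactly and the SC-FDMA signal degenerates to plain QPSK, whose envelope is constant.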
The base station receiver for SC-FDMA is slightly more complex than the OFDMA receiver, especially
if it requires equalizers that perform as well as OFDMA receivers. Yet, this disadvantage is far
outweighed by the benefits in uplink range and device battery life that can be reached with SC-FDMA.
Furthermore, the dynamic resource usage with a 1 ms resolution means that no base-band receiver
needs to be on standby for each UE; devices that do have data to transmit use the base station
resources dynamically. Lastly, as data rates increase, the most resource consuming process in
both the uplink and downlink receiver chains is the channel decoding.
2.3.3 MIMO Basics
The MIMO operation is one of the fundamental technologies that the first LTE release brought, despite
being included earlier in WCDMA specifications [5]. However, in WCDMA, the MIMO operates differently
from LTE, as a spreading operation is applied in WCDMA.
In the first LTE release, MIMO includes spatial multiplexing, pre-coding and transmit diversity. Spatial
multiplexing consists in the signal transmission from two or more different antennas with different data
streams, with further separation through signal processing in the receiver. Thus, in theory, a 2-by-2
antenna configuration doubles the peak data rates, or quadruples it if applied with a 4-by-4 antenna
configuration. Pre-coding handles the weighting of the signals transmitted from different antennas, in
order to maximize the received Signal-to-Noise Ratio (SNR). Lastly, transmit diversity is used to exploit
the gains from independent fading between different antennas through the transmission of the same
signal from various antennas with some coding.
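The rate-doubling idea of spatial multiplexing can be sketched with a zero-forcing receiver, the simplest way to separate the streams (used here purely as an illustration; LTE receivers are not mandated to use it):

```python
import numpy as np

rng = np.random.default_rng(2)

# 2x2 channel matrix: entry (i, j) is the complex gain from transmit
# antenna j to receive antenna i, assumed known via channel estimation.
H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

s = np.array([1 + 1j, -1 - 1j]) / np.sqrt(2)   # two independent QPSK symbols
y = H @ s                                      # both antennas transmit simultaneously

s_hat = np.linalg.solve(H, y)                  # zero-forcing stream separation
```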
Figure 2.8: OFDMA reference symbols to support two eNB transmit antennas (adapted from [4]).
In order to allow the separation, at the receiver, of the MIMO streams transmitted by different an-
tennas, reference symbols are assigned to each antenna. This prevents the channel estimation of
one antenna from being corrupted by another, because the reference stream sent by each antenna
is unique. This principle can be observed in Figure 2.8 and can be applied to two or more antennas,
with the first LTE Release specifying up to four. Furthermore, as the number of antennas
increases, so do the required SNR, the complexity of the transmitters and receivers,
and the reference symbol overhead.
MIMO can also be used in the LTE uplink, although it is not possible to increase the single user data
rate for mobile devices that only have a single antenna. Yet, the cell level maximum data rate can be
doubled through the allocation of two devices with orthogonal reference signals, i.e. Virtual Multiple-
Input Multiple-Output (V-MIMO). Accordingly, the base station handles this transmission as a MIMO
transmission, separating the data streams by means of the MIMO receiver. This operation does not bring
any major implementation complexity from the device perspective, as only the reference signal sequence is
altered. On the other hand, additional processing is required from the network side in order to separate
the different users. Lastly, it is also important to mention that SC-FDMA is well suited to MIMO,
as the users within the same cell are orthogonal to one another and the local SNR may be very high for
the users close to the base station.
2.4 Physical Layer Design
After covering the OFDMA and SC-FDMA principles, it is now possible to describe the physical layer of
LTE. This layer is characterized by the design principle of resource usage based solely on dynamically
allocated shared resources, instead of having dedicated resources reserved for a single user. Further-
more, it has a key role in defining the resulting capacity and thus allows for a comparison between
different systems for expected performance. This section will introduce the transport channels and how
they are mapped to the physical channels, the available modulation methods for both data and control
channels and the uplink/downlink data transmission.
2.4.1 Transport Channels
As there is no reservation of dedicated resources for single users, LTE contains only common transport
channels; these channels have the role of connecting the Medium Access Control (MAC) layer to the
physical layer. The physical channels carry the transport channels, and it is the processing applied to
those physical channels that characterizes each transport channel. Moreover, the physical layer needs
to provide dynamic resource assignment both for data rate variation and for resource division between
users. The transport channels and their mapping to the physical channels are described in the following
points:
• Broadcast Channel (BCH) is a downlink broadcast channel that is used to broadcast the required
system parameters to enable devices to access the system.
• Downlink Shared Channel (DL-SCH) carries the user data for point-to-point connections in the
downlink direction. All the information transported in the DL-SCH is intended only for a single user
or UE in the RRC CONNECTED state.
• Paging Channel (PCH) transports the paging information in the downlink direction, aimed at the
device in order to move it from the RRC IDLE to the RRC CONNECTED state.
• Multicast Channel (MCH) is used in the downlink direction to carry multicast service content to
the UE.
• Uplink Shared Channel (UL-SCH) transfers both the user data and the control information from
the device in the uplink direction in the RRC CONNECTED state.
• Random Access Channel (RACH) acts in the uplink direction to respond to paging messages,
as well as to initiate the move towards the RRC CONNECTED state according to the UE
data transmission needs.
The mentioned RRC IDLE and RRC CONNECTED states are described in Section 2.5.
In the uplink direction, the UL-SCH and RACH are respectively transported by the Physical Uplink
Shared Channel (PUSCH) and Physical Random Access Channel (PRACH).
In the downlink direction, the PCH and the BCH are mapped to the Physical Downlink Shared Chan-
nel (PDSCH) and the Physical Broadcast Channel (PBCH), respectively. Lastly, the DL-SCH is mapped
to the PDSCH and MCH is mapped to the Physical Multicast Channel (PMCH).
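The transport-to-physical channel mappings described above can be condensed into a small lookup table (a plain restatement of the text, not a 3GPP data structure):

```python
# Transport channel -> physical channel, exactly as listed in the text.
DOWNLINK_MAPPING = {"BCH": "PBCH", "PCH": "PDSCH", "DL-SCH": "PDSCH", "MCH": "PMCH"}
UPLINK_MAPPING = {"UL-SCH": "PUSCH", "RACH": "PRACH"}
```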
2.4.2 Modulation
Both the uplink and downlink directions use the QAM modulator, namely 4-QAM (also known as QPSK),
16-QAM and 64-QAM, whose symbol constellations can be observed in Figure 2.9. The first two are
available in all devices, while the support for 64-QAM in the uplink direction depends upon the UE class.
QPSK modulation is used when operating at full transmission power as it allows for good transmitter
power efficiency. For 16-QAM and 64-QAM modulations, the devices use a lower maximum transmitter
power.
Figure 2.9: LTE modulation constellations (adapted from [4]).
Binary Phase Shift Keying (BPSK) has been specified for control channels, which can opt between
BPSK or QPSK for control information transmission. Additionally, uplink control data is multiplexed along
with the user data, with both types of data using the same modulation (i.e. QPSK, 16-QAM or 64-QAM).
2.4.3 Downlink User Data Transmission
The user data is carried on the PDSCH in the downlink direction with a 1 ms resource allocation. More-
over, the sub-carriers are allocated in resource units of 12 sub-carriers, totalling 180 kHz per allocation
unit. Thus, the user data rate depends on the number of allocated sub-carriers; this allocation of re-
sources is managed by the eNB and it is based on the Channel Quality Indicator (CQI) obtained from
the terminal. Similarly to what happens in the uplink, the resources are allocated in both the time and
frequency domain, as it can be observed in Figure 2.10. The bandwidth can be allocated between 0 and
20 MHz, in steps of 180 kHz.
Figure 2.10: Downlink resource allocation at eNB (adapted from [4]).
The Physical Downlink Control Channel (PDCCH) notifies the device about which resources are
allocated to it in a dynamic fashion and with a 1 ms allocation granularity. PDSCH data can occupy
between 3 and 6 symbols per 0.5 ms slot, depending on both the PDCCH and on the cyclic prefix length
(i.e. short or extended). In the 1 ms subframe, the first 0.5 ms are used for control symbols (for PDCCH)
and the following 0.5 ms are used solely for data symbols (for PDSCH). Furthermore, the second 0.5
ms slot can fit 7 symbols if a short cyclic prefix is used.
Not only are the available resources for user data reduced by the control symbols, they also
have to be shared with broadcast data and with reference and synchronization signals. The reference
symbols are distributed evenly in the time and frequency domains in order to reduce the required
overhead. This distribution of reference symbols requires rules to be defined so that both the receiver
and the transmitter understand the mapping. The common channels, such as the BCH, also need
to be taken into account in the total resource allocation space.
The channel coding chosen for LTE user data was turbo coding, which uses the same Parallel
Concatenated Convolutional Coding (PCCC) turbo encoder as WCDMA/High Speed Packet Access
(HSPA) [5]. The turbo interleaver of WCDMA was modified to better fit the LTE properties and slot
structures, as well as to allow higher flexibility for implementing parallel signal processing with
increasing data rates. The channel coding consists of 1/3-rate turbo coding for user data in both
uplink and downlink directions. To reduce the processing load, the maximum block size for turbo
coding is limited to 6144 bits; larger allocations are segmented into multiple encoding blocks.
In the downlink, there is no multiplexing with the PDCCH onto the same physical layer resources, as
each channel has its own separate resources during the 1 ms subframe.
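The 6144-bit limit implies a simple segmentation rule: any transport block larger than that is split into multiple code blocks. A simplified sketch that ignores CRC attachment and the exact 3GPP code block size table:

```python
import math

MAX_TURBO_BLOCK_BITS = 6144   # maximum turbo-coding block size in bits

def num_code_blocks(transport_block_bits):
    """Number of code blocks a transport block is segmented into
    (simplified: real 3GPP segmentation also adds per-block CRCs)."""
    return max(1, math.ceil(transport_block_bits / MAX_TURBO_BLOCK_BITS))
```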
LTE uses physical layer retransmission combining, also commonly referred to as Hybrid Automatic
Repeat Request (HARQ). In such an operation, the receiver stores packets that fail the Cyclic
Redundancy Check (CRC) and combines the received packet with the previously stored one when a
retransmission is received.
After the data is encoded, it is scrambled and then modulated. The scrambling is done in order to
avoid cases where a device decodes data that is aimed at another device with the same resource
allocation. The modulation mapper applies the intended modulation (i.e. QPSK, 16-QAM or 64-QAM)
and the resulting symbols are fed for layer mapping and pre-coding. For multiple transmit antennas,
the data is divided into two or four data streams (depending if two of four antennas are used) and then
mapped to resource elements available for PDSCH followed by the OFDM signal generation. For a
single antenna transmission, the layer mapping and pre-coding functionalities are not used.
Thus, the resulting instantaneous data rate for downlink depends on the:
• modulation method applied, with 2, 4 or 6 bits per modulated symbol for QPSK, 16-QAM and
64-QAM, respectively;
• allocated amount of sub-carriers;
• channel encoding rate;
• number of transmit antennas with independent streams and MIMO operation.
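The factors above can be combined into a rough peak-rate estimate. The figures in Table 2.1 are consistent with assuming 12 OFDM symbols per 1 ms subframe available for user data (the remainder consumed by control and reference overhead); that assumption is this sketch's, not a statement of the specifications. The 20 MHz column is reproduced exactly and the others to within the table's rounding:

```python
# Assumed data-carrying symbols per 1 ms subframe (overhead excluded).
DATA_SYMBOLS_PER_MS = 12

def peak_rate_mbps(subcarriers, bits_per_symbol, code_rate, streams=1):
    """Instantaneous peak data rate from the four factors listed above."""
    bits_per_ms = subcarriers * DATA_SYMBOLS_PER_MS * bits_per_symbol * code_rate
    return streams * bits_per_ms / 1000.0  # bits per ms -> Mbps

print(peak_rate_mbps(1200, 2, 0.5))             # QPSK 1/2, 20 MHz -> 14.4
print(peak_rate_mbps(1200, 6, 1.0))             # 64-QAM 4/4, 20 MHz -> 86.4
print(peak_rate_mbps(1200, 6, 1.0, streams=2))  # 2 x 2 MIMO -> 172.8
```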
Assuming that all the resources are allocated to a single user, and counting only the available physical
layer resources, the instantaneous peak data rate for the downlink ranges between 0.9 and 86.4 Mbps
with a single stream, rising up to 172.8 Mbps with 2 x 2 MIMO; with 4 x 4 MIMO it can reach a theoretical
instantaneous peak data rate of 340 Mbps. The single stream and 2 x 2 MIMO data rates can be
observed in Table 2.1.
Table 2.1: Downlink peak data rates [5].
Peak bit rate [Mbps] per sub-carriers / bandwidth [MHz] combination:

                                72/1.4   180/3.0   300/5.0   600/10   1200/20
QPSK 1/2    Single stream          0.9       2.2       3.6      7.2      14.4
16-QAM 1/2  Single stream          1.7       4.3       7.2     14.4      28.8
16-QAM 3/4  Single stream          2.6       6.5      10.8     21.6      43.2
64-QAM 3/4  Single stream          3.9       9.7      16.2     32.4      64.8
64-QAM 4/4  Single stream          5.2      13.0      21.6     43.2      86.4
64-QAM 3/4  2 x 2 MIMO             7.8      19.4      32.4     64.8     129.6
64-QAM 4/4  2 x 2 MIMO            10.4      25.9      43.2     86.4     172.8
2.4.4 Uplink User Data Transmission
The user data in the uplink direction is carried on the PUSCH, which has a 10 ms frame structure and
is based on the allocation of time and frequency domain resources with 1 ms and 180 kHz resolution,
respectively. The scheduler that handles this allocation of resources is located in the eNB, as can
be observed in Figure 2.11. Only random access resources can be used without prior signalling from
the eNB, and there are no fixed resources for the devices. Accordingly, the device needs to provide
the uplink scheduler with information about its transmission requirements as well as its available
transmission power resources.
The frame structure uses a 0.5 ms slot and an allocation period of two 0.5 ms slots (i.e. a subframe).
Similarly to what was discussed in the previous subsection for the downlink direction, user data has
to share the data space with reference symbols and signalling. The bandwidth can be allocated
between 0 and 20 MHz in contiguous 180 kHz steps, similarly to the downlink transmission. The slot
bandwidth adjustment between consecutive TTIs can be observed in Figure 2.12, in which doubling the
data rate results in doubling the bandwidth used. It should be noted that the reference signals always
occupy the same space in the time domain; consequently, a higher data rate also corresponds to a
higher data rate for the reference symbols.
The cyclic prefix used in the uplink can also be either short or extended, where the short cyclic prefix
allows for a bigger data payload. The extended prefix is not frequently used, as the benefit of having
seven data symbols per slot (with the short prefix) outweighs the possible degradation from inter-symbol
interference caused by a channel delay spread exceeding the cyclic prefix.
The channel coding for user data in the uplink direction is also 1/3-rate turbo coding, the same as in
the downlink direction. Besides the turbo coding, the uplink also has the physical layer HARQ with the
same combining methods as in the downlink direction.
Figure 2.11: Uplink resource allocation controlled by eNB scheduler (adapted from [4]).
Figure 2.12: Data rate between TTIs in the uplink direction (adapted from [4]).
Thus, the resulting instantaneous uplink data rate depends on the:
• modulation method applied, with the same methods available in the downlink direction;
• bandwidth applied;
• channel coding rate;
• time domain resource allocation.
Similarly to the previous subsection, assuming that all the resources are allocated for a single user
and counting only the physical layer resources available, the instantaneous peak data rate for uplink
ranges between 900 kbps and 86.4 Mbps, as shown in Table 2.2. As discussed in subsection 2.3.3, the
cell or sector specific maximum total data throughput can be increased with V-MIMO.
Table 2.2: Uplink peak data rates [4].
Peak bit rate [Mbps] per sub-carriers / bandwidth [MHz] combination:

                                72/1.4   180/3.0   300/5.0   600/10   1200/20
QPSK 1/2    Single stream          0.9       2.2       3.6      7.2      14.4
16-QAM 1/2  Single stream          1.7       4.3       7.2     14.4      28.8
16-QAM 3/4  Single stream          2.6       6.5      10.8     21.6      43.2
16-QAM 4/4  Single stream          3.5       8.6      14.4     28.8      57.6
64-QAM 3/4  Single stream          3.9       9.7      16.2     32.4      64.8
64-QAM 4/4  Single stream          5.2      13.0      21.6     43.2      86.4
2.5 Mobility
This section presents an overview of how LTE mobility is managed in Idle and Connected modes, as
mobility is crucial in any telecommunications system. Mobility has clear benefits, such as maintaining
low delay services (e.g. voice or real time video connections) while moving at high speed and switching
connections to the best serving cell in areas between cells. However, this comes at the cost of increased
network complexity. That being said, the LTE radio network aims to provide seamless mobility while
minimizing network complexity.
Table 2.3: Differences between both mobility modes.
RRC IDLE                                                    RRC CONNECTED
Cell reselections done automatically by the UE              Network controlled handovers
Based on UE measurements                                    Based on UE measurements
Controlled by broadcasted parameters
Different priorities can be assigned to frequency layers
Mobility can be divided into two procedures: idle and connected mode mobility. In the former, the UE
autonomously reselects cells according to parameters broadcast by the network, without being connected
to it; in the latter, the UE is connected to the network (i.e. transmitting data) and the E-UTRAN decides
whether or not to trigger a handover according to the reports sent by the UE. These two states
correspond, respectively, to the RRC IDLE and RRC CONNECTED modes, whose differences are
summarized in Table 2.3.
It is also important to mention the following measurements, which are performed by the UE for mobility in LTE:
• Reference Signal Received Power (RSRP), which is the averaged power measured in a cell
across receiver branches of the resource elements that contain reference signals specific to the
cell;
• Reference Signal Received Quality (RSRQ), which is the ratio of the RSRP and the Evolved
UMTS Terrestrial Radio Access (E-UTRA) Received Signal Strength Indicator (RSSI) for the refer-
ence signals;
• RSSI, which is the total received wideband power on a given frequency, including noise originated
from interfering cells and other sources. It is not reported individually by the UE, but is used
internally in the calculation of the RSRQ value.
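As an illustration, RSRQ can be obtained from RSRP and RSSI in dB form. The scaling by the number of measured resource blocks N follows the 3GPP definition of RSRQ as N·RSRP/RSSI in linear units (TS 36.214); the numeric values below are purely illustrative:

```python
import math

def rsrq_db(rsrp_dbm, rssi_dbm, n_rb):
    """RSRQ [dB] from RSRP [dBm], RSSI [dBm] and the number of
    resource blocks N over which the RSSI is measured. In linear
    units the definition is RSRQ = N * RSRP / RSSI, which in dB
    form becomes a sum of logarithmic terms."""
    return 10 * math.log10(n_rb) + rsrp_dbm - rssi_dbm

# Illustrative values, not measurements:
print(round(rsrq_db(-95.0, -65.0, 50), 1))  # -13.0
```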
2.5.1 Idle Mode Mobility
In Idle mode, the UE chooses a suitable cell based on radio measurements (i.e. cell selection). Whenever
a UE selects a cell, it is said to be camped on that cell. The cell is required to have good radio quality
and must not be blacklisted. Specifically, it must fulfil the S-criterion:

Srxlev > 0, (2.1)

where

Srxlev = Qrxlevmeas − (Qrxlevmin + Qrxlevminoffset), (2.2)

and Srxlev corresponds to the Rx level value of the cell, Qrxlevmeas is the measured RSRP, Qrxlevmin is
the minimum required level for cell camping and Qrxlevminoffset is an offset used when searching for a
higher priority Public Land Mobile Network (PLMN). This offset exists because LTE allows priority levels
to be set for PLMNs, in order to specify preferred network operators in cases such as roaming.
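A minimal sketch of the S-criterion check, written against the TS 36.304 form of the criterion (additional terms such as power compensation are omitted); the parameter values are illustrative:

```python
def s_criterion_fulfilled(q_rxlevmeas, q_rxlevmin, q_rxlevminoffset=0):
    """Cell-selection S-criterion, simplified.

    q_rxlevmeas      -- measured RSRP of the candidate cell [dBm]
    q_rxlevmin       -- minimum required Rx level for camping [dBm]
    q_rxlevminoffset -- offset applied while searching for a
                        higher-priority PLMN [dB]
    """
    s_rxlev = q_rxlevmeas - (q_rxlevmin + q_rxlevminoffset)
    return s_rxlev > 0

print(s_criterion_fulfilled(-100, -120))      # True: 20 dB margin
print(s_criterion_fulfilled(-100, -120, 24))  # False: offset removes the margin
```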
While the UE stays camped on a cell, it continuously tries to find better cells as candidates for
reselection, in accordance with the reselection criteria. Furthermore, the network can prevent the UE
from considering specific cells for reselection (i.e. cell blacklisting). To reduce the amount of
measurements, it was defined that if the Rx level value of the serving cell (i.e. SServingCell) is high
enough, the UE does not need to make any intra-frequency, inter-frequency or inter-system measurements.
The intra-frequency measurements start once SServingCell ≤ Sintrasearch, and the inter-frequency (and
inter-system) measurements once SServingCell ≤ Snonintrasearch, where Sintrasearch and Snonintrasearch
are the serving cell Rx level thresholds below which the UE starts making the respective measurements.
For intra-frequency and equal priority E-UTRAN frequency cell reselection, a ranking is made based on
the Rs criterion for the serving cell and the Rn criterion for the neighboring cells:
Rs = Qmeas,s +Qhyst, (2.3)
Rn = Qmeas,n +Qoffset, (2.4)
where Qmeas is the RSRP measurement used for cell reselection, Qhyst is a power domain hysteresis
used to avoid the ping-pong phenomenon between cells, and Qoffset is an offset control parameter to
deal with different frequencies and/or cell specific characteristics (e.g. propagation properties and
hierarchical cell structures). Reselection occurs to the highest ranked neighbor cell that remains better
ranked than the serving cell for longer than Treselection, avoiding overly frequent reselections. Through
the hysteresis provided by Qhyst, a neighboring cell needs to be better than the serving cell by a
configurable amount for reselection to occur. Lastly, Qoffset allows biasing the reselection towards
particular cells and/or frequencies.
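The ranking rule of Equations (2.3) and (2.4) can be sketched as follows. The Treselection timer is omitted, a single Qoffset is applied to all neighbors for simplicity, and the cell identifiers and power values are invented for illustration:

```python
def best_reselection_target(serving_rsrp, q_hyst, neighbours, q_offset):
    """Rank cells with the R-criterion and return the best neighbour
    only if it outranks the serving cell (Rs = Qmeas,s + Qhyst versus
    Rn = Qmeas,n + Qoffset). Timing via Treselection is not modeled.

    neighbours -- dict mapping cell id to measured RSRP [dBm]
    """
    r_s = serving_rsrp + q_hyst
    ranked = sorted(neighbours.items(),
                    key=lambda item: item[1] + q_offset, reverse=True)
    best_id, best_rsrp = ranked[0]
    if best_rsrp + q_offset > r_s:
        return best_id   # candidate for reselection after Treselection
    return None          # stay on the serving cell

# Serving cell at -98 dBm with 3 dB of hysteresis:
print(best_reselection_target(-98, 3, {"A": -97, "B": -92}, 0))  # B
print(best_reselection_target(-98, 3, {"A": -97}, 0))            # None
```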
Both inter-frequency and inter-system reselection in LTE are based on the concept of layers, designed
to let operators control how the UE prioritizes camping on different Radio Access Technologies (RATs)
or frequencies. This method is known as absolute priority based reselection: each layer is assigned a
specific priority and the UE attempts to camp on the highest priority layer that can provide adequate
service. The UE will camp on a higher priority layer if it stays above a network-defined threshold
Threshhigh for longer than the Treselection period. Furthermore, the UE will camp on a lower priority
layer only if the higher priority layer drops below the aforementioned threshold and the lower priority
layer exceeds the threshold Threshlow.
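A simplified sketch of absolute priority based reselection: it ignores the Treselection timer, applies a single Threshlow to all lower priority layers, and uses invented layer names and levels:

```python
def choose_layer(candidates, thresh_high, thresh_low):
    """Pick the camping layer under absolute-priority reselection,
    simplified: scan layers from highest to lowest priority and camp
    on the first one above its applicable threshold (Threshhigh for
    the highest-priority layer, Threshlow for lower-priority fallback).

    candidates -- dict mapping layer name to (priority, rx_level [dBm])
    """
    ordered = sorted(candidates.items(), key=lambda kv: kv[1][0], reverse=True)
    top_priority = ordered[0][1][0]
    for name, (priority, rx_level) in ordered:
        threshold = thresh_high if priority == top_priority else thresh_low
        if rx_level > threshold:
            return name
    return None

layers = {"LTE-1800": (7, -118), "LTE-800": (5, -104), "UMTS-2100": (3, -100)}
# The top-priority layer is too weak, so the UE falls back one level:
print(choose_layer(layers, thresh_high=-110, thresh_low=-112))  # LTE-800
```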
2.5.2 Intra-LTE Handovers
As mentioned previously, UE mobility is controlled through handovers only when the Radio Resource
Control (RRC) connection is established. Handovers are based on UE measurements and are controlled
by the E-UTRAN, which decides when to perform the handover and what the target cell will be. In order
to achieve lossless handovers, packet forwarding is used between the source and the target eNB. In
addition, the S1 connection in the core network is only updated once the radio handover is completed
(i.e. late path switch), and the core network has no control over the handovers.
Figure 2.13: Intra-frequency handover procedure (adapted from [4]).
The intra-frequency handover operation can be observed in Figure 2.13. In the beginning, the UE
has a user plane connection to the source eNB and also to the SAE Gateway (SAE GW). Besides that,
there is an S1 signalling connection between the MME and the eNB. Once the target cell fulfills the
measurement threshold, the UE sends a measurement report to the source eNB, which establishes
a signaling connection and a GPRS Tunneling Protocol (GTP) tunnel towards the target cell. When the
target eNB has the required resources available, the source eNB sends a handover command to the
UE. The UE can then switch from the source to the target eNB, resulting in a successful update of the
core network connection.
Before the late path switching is completed, there is a brief moment when the downlink user plane
packets are forwarded from the source eNB to the target eNB through the X2 interface. In the uplink,
the eNB forwards all successfully received uplink Radio Link Control (RLC) Service Data Units (SDUs)
to the packet core and, furthermore, the UE re-transmits the RLC SDUs left unacknowledged by the
source eNB.
Regarding the handover measurements, the UE must identify the target cell through its synchroniza-
tion signals before it can send the measurement report. Once the reporting threshold is fulfilled, the UE
sends handover measurements to the source eNB.
Figure 2.14: Automatic intra-frequency neighbor identification (adapted from [4]).
The UE in E-UTRAN can detect intra-frequency neighbors automatically, which results in both simpler
network management and better network quality. The correct use of this functionality is important, as
call drops due to missing neighbors are common. The procedure can be observed in Figure 2.14, where
the UE approaches a new cell and receives its PCI through the synchronization signals. The UE then
sends a measurement report to the eNB once the handover report threshold has been reached. However,
the eNB does not have an X2 connection to that cell, and the physical cell Identity (ID) is not enough
to uniquely identify it, as there are only 504 physical cell IDs while large networks can extend to tens
of thousands of cells. The serving eNB therefore requests the UE to decode the global cell ID from the
broadcast channel of the target cell, as it uniquely identifies that same cell. With the global cell ID, the
serving eNB can find the transport layer address using the information sent by the MME and thus set
up a new X2 connection, allowing the eNB to proceed with the handover.
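The need for the global cell ID can be shown with a toy lookup: since only 504 PCIs exist, a reported PCI may match several cells, while the global cell ID is unique. All cell data below is hypothetical:

```python
# Hypothetical network inventory: two cells reuse PCI 101, so the PCI
# alone cannot identify a handover target; the global cell ID decoded
# from the target's broadcast channel resolves the ambiguity.
cells = [
    {"global_id": "ECGI-0001", "pci": 101},
    {"global_id": "ECGI-0002", "pci": 101},  # PCI reused elsewhere
    {"global_id": "ECGI-0003", "pci": 210},
]

def cells_with_pci(pci):
    """Return the global IDs of every known cell using a given PCI."""
    return [c["global_id"] for c in cells if c["pci"] == pci]

print(cells_with_pci(101))  # ['ECGI-0001', 'ECGI-0002'] -> ambiguous
print(cells_with_pci(210))  # ['ECGI-0003'] -> unique
```

This ambiguity is the same one exploited later in the Thesis when detecting PCI conflicts.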
The generation of the intra-frequency neighbor list is simpler than creating inter-frequency or inter-RAT
neighbors, as the UE can easily identify all the cells within the same frequency. For inter-frequency
and inter-RAT neighbor creation, the eNB must not only ask the UE to make specific measurements,
but also schedule gaps in the signal to allow the UE to perform those measurements.
2.5.3 Inter-system Handovers
LTE allows for inter-system handovers, also called inter-RAT handovers, between the E-UTRAN and
the GSM EDGE Radio Access Network (GERAN), the UMTS Terrestrial Radio Access Network (UTRAN)
or cdma2000®. The inter-RAT handover is controlled by the source access system, which starts the
measurements and decides whether or not to perform the handover. It is carried out as a backwards
handover, with the resources being reserved in the target system before the handover command is
sent to the UE. The GERAN system, however, does not support Packet-Switched Handover (PS HO),
as its resources are not reserved before the handover. The core network is responsible for the signalling,
because there are no direct interfaces between these different radio systems.
The inter-RAT handover is similar to the intra-LTE case in which the packet core node is changed.
The information from the target system is transported to the UE transparently through the source
system. To avoid the loss of user data, the data can be forwarded from the source to the target
system. The UE does not perform any signalling towards the core network, which speeds up the
execution of the handover. Furthermore, the security and QoS context is transferred from the source
to the target system. Additionally, the Serving Gateway (GW) can be used as the mobility anchor for
inter-RAT handovers. An overview of the inter-system handover is represented in Figure 2.15.
Figure 2.15: Overview of the inter-RAT handover from E-UTRAN to UTRAN/GERAN (adapted from [4]).
2.6 Performance Data Collection
As telecommunication networks become more and more complex, new monitoring and management
operations need to be developed. A set of methods now exists for collecting data from the networks.
These methods not only enable better planning and optimization of the networks, but also allow
operators to know whether the networks are delivering the required quality to the users.
2.6.1 Performance Management
Performance Management (PM) consists of evaluating and reporting both the behaviour and effectiveness
of the network elements by gathering statistical information, maintaining and examining historical
logs, determining system performance and modifying the system modes of operation [12]. It was one of
the concepts added to the Telecommunication Management Network (TMN) framework, defined by the
International Telecommunication Union (ITU) to manage telecommunication networks and services and
to handle their growing complexity. The other concepts are security, fault, accounting and configuration.
PM involves the following:
• configuring data-collection methods and network testing;
• collecting performance data;
• optimizing network service and response time;
• proactive management and reporting;
• managing the consistency and quality of network services.
PM is the measurement of both network and application traffic in order to deliver a consistent and
predictable level of service at a given instant and across a defined period of time. PM enables vendors
and operators to detect deteriorating trends in advance and thus address potential threats, preventing
faults [13]. The architecture of a PM system consists of four layers:
• Data Collection and Parsing Layer - where data is collected from Network Elements (NEs) using
network specific protocols (e.g. FTP and Simple Network Management Protocol (SNMP));
• Data Storage and