NFV Orchestration: Challenges in Telecom Deployments Shamik Mishra
OpenStack Summit, Vancouver, May 2015
Proprietary & Confidential. © Aricent 2015
Presenter
Shamik Mishra
Senior Engineering Project Manager, Aricent
Focus Area: Management and Network Orchestration for NFV
Identifying the requirements for NFV MANO
Specifying an extensible, modular architecture for an NFV Service Orchestrator
Identifying possible requirements on the underlying Virtual Infrastructure Manager, i.e. OpenStack
Agenda
NFV Implementation Challenges
End-to-End Service Monitoring of NFV
– Service Creation Model
– Service Aware Monitor
Cloud-RAN use case
Community Support
NFV: Quick Recap
Traditional
– Separate network appliance for each function
– Dedicated hardware for each function
– Unused computing capability
– Separate management systems
– Difficult to introduce new services rapidly
– Higher power consumption and larger real-estate footprint

NFV
– Each function virtualized and hosted on commodity hardware
– Scalable, elastic and efficient usage of resources
– Possibility of unified management and orchestration of services
– Easily introduce new functions
– Cost-effective

[Figure: traditional appliances (gateways, routers, firewall, load balancer, distribution switch, IMS, MMEs, IT applications) consolidated as virtualized functions on commodity hardware]
NFV Implementation Challenges
How concerned is your company about the following technical challenges related to NFV? (Source: Heavy Reading NFV Multi-Client Study 2014)

Challenges surveyed (responses ranged from "Very concerned short term (within next 2 years)" through "Very concerned longer term (in 2 to 5 years)", "Only slightly concerned" and "Not concerned at all"):
– Scalability of COTS hardware
– Power consumption
– Reliability of COTS hardware
– Managing & interworking various APIs
– Cloud orchestration & management (includes optimizing resources)
– Network impact of virtualizing the media plane
– Managing service failover and recovery
– Operations & backend integration (OSS integration)
– Deterministic packet processing performance in the data plane
– Security
– KPI impact related to media-sensitive applications (QoS, jitter, latency)
– Troubleshooting & service assurance

[Chart: per-challenge response distribution, 0–100%]
Network functions virtualization is not just porting legacy network functions to commodity hardware.
Challenges in Monitoring, Service Assurance and Scaling

Monitoring & Service Assurance
– Realizing large-scale NFV deployments, replicating and monitoring them
– Simplifying networking configuration to ensure automated service delivery

Scaling
– Legacy network elements are often developed for peak capacity and not for scaling; scaling also does not necessarily depend on infrastructure alone
– Scaling one element in a chain of services may require touching or modifying several other elements (including legacy elements)
End-to-End Service Monitoring
End-to-End Service Monitoring in NFV: Orchestration & Monitoring Flow

Service Request → Transform → Resource Requests → Resource Deployment → Resource Monitoring → Monitoring Generation → Monitoring Collection → Monitoring Aggregation → Service Monitoring Aggregation → Visualization → Actions
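The flow above can be sketched as a chain of plain functions that an orchestrator threads a service request through. This is a minimal illustration; the function names, resource fields and stubbed metric values are all assumptions, not part of any real MANO implementation.

```python
def transform(service_request):
    """Decompose a service request into per-VNF resource requests."""
    return [{"vnf": vnf, "vcpus": 2, "ram_mb": 4096}
            for vnf in service_request["chain"]]

def deploy(resource_requests):
    """Pretend-deploy each resource request; return instance handles."""
    return [{"vnf": r["vnf"], "instance_id": i}
            for i, r in enumerate(resource_requests)]

def collect(instances):
    """Gather one metric sample per deployed instance (stubbed)."""
    return [{"vnf": inst["vnf"], "cpu_pct": 40.0} for inst in instances]

def aggregate(samples):
    """Roll per-VNF metrics up to a single service-level view."""
    return {"service_cpu_pct": sum(s["cpu_pct"] for s in samples) / len(samples),
            "vnf_count": len(samples)}

request = {"chain": ["firewall", "load_balancer", "app"]}
service_view = aggregate(collect(deploy(transform(request))))
print(service_view)  # {'service_cpu_pct': 40.0, 'vnf_count': 3}
```

The point of the composition is that each stage only understands the output of the stage before it, which is what makes the monitoring view traceable back to the original service request.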
End-to-End Service Monitoring in NFV: Challenges

The health of the end-to-end chained service is necessary for the orchestrators & OSS to ensure service continuity and to initiate automated actions.

Example
– A deployed VNF chain may contain multiple instances of the VNF application on VMs, load balancers, firewalls, etc.
– Their network connectivity is managed by software switches
– The chain can have instances deployed across compute nodes and clouds
– Clouds can be interconnected over VPN
– SDN controllers can create / manage the service chains
– Monitoring data is also needed for debugging problems in the network
– The VNFs can be from different vendors with separate monitoring & troubleshooting mechanisms

[Figure: example chain — firewall, load balancer, VNF1/VNF1’ and VNF2/VNF2’ on hypervisors connected by a switch, with possible monitoring points marked]

Question: Do we need a new and unified monitoring model for NFV?
End-to-End Service Monitoring in NFV: Impact on Scalability of a Service Chain

While scaling an element of a service chain, we may need to touch or modify the other elements in the chain. The orchestrator or device that takes the scaling decision needs to evaluate the impact on the subsequent sub-graph.

[Figure: chain A → B → C → D. When B becomes a bottleneck, B is scaled and a new traffic path is added, requiring modification of adjacent elements; C or D could then become the next bottleneck, so in the worst case the entire sub-graph is scaled.]
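The "subsequent sub-graph" an orchestrator must re-evaluate is simply everything reachable downstream of the scaled node. A minimal sketch, assuming a hypothetical adjacency-dict representation of the forwarding graph:

```python
def downstream(graph, node):
    """Return all nodes reachable from `node`: the sub-graph that a
    scaling decision at `node` may force the orchestrator to revisit."""
    seen, stack = set(), [node]
    while stack:
        for nxt in graph.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# Chain A -> B -> C -> D: scaling B forces a look at C and D as well.
chain = {"A": ["B"], "B": ["C"], "C": ["D"], "D": []}
print(sorted(downstream(chain, "B")))  # ['C', 'D']
```

A real orchestrator would additionally weigh each downstream element's remaining capacity before deciding whether it, too, must be scaled or reconfigured.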
End-to-End Service Monitoring in NFV: Impact of VNF States on Monitoring

What is monitored can also depend on VNF execution states:
– There can be multiple VNF execution states, e.g. Install, Start, Stop, Run, Maintain
– The chain’s state is a derivative of the individual VNF states
– We may want to monitor a different set of things as the chain state changes
– The NFV Orchestrator will take actions when the chain state changes, such as reconfiguring the applications, configuration, etc.

Example: in chain A → B → C → D, all VNFs are in state RUN and monitored with their RUN profiles, so the chain state is (A,RUN) (B,RUN) (C,RUN) (D,RUN). When B is replaced by B’ under upgrade (state MAINT), the chain state becomes (A,RUN) (B’,MAINT) (C,RUN) (D,RUN) and the monitoring applied to B’ is modified accordingly.
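The mapping from execution state to monitored metrics can be made explicit as a lookup table. The state names below mirror the slide; the metric sets themselves are invented for illustration.

```python
# Hypothetical per-state monitoring profiles (metric names are examples).
MONITORING_PROFILE = {
    "INSTALL": {"image_checksum"},
    "START":   {"boot_time", "config_applied"},
    "RUN":     {"cpu_pct", "throughput_mbps", "latency_ms"},
    "MAINT":   {"upgrade_progress"},
}

def chain_monitoring(vnf_states):
    """Derive each VNF's active metric set from its execution state."""
    return {vnf: MONITORING_PROFILE[state]
            for vnf, state in vnf_states.items()}

# B' is under upgrade: its metrics switch from the RUN set to the MAINT set.
states = {"A": "RUN", "B'": "MAINT", "C": "RUN", "D": "RUN"}
print(chain_monitoring(states)["B'"])  # {'upgrade_progress'}
```

When the orchestrator transitions a VNF's state, re-evaluating this table is what "modified monitoring" amounts to.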
Translation of Forwarding Graph to Resource Requests

A Service Request would consist of:
– A Network Forwarding Graph
– A set of Key Performance Indicators (or SLAs) for the service

The service request should get decomposed into Resource Requests, for example:
– Computing resource requests
– Networking resource requests
– Placement requests

The KPI set gets transformed into parameters for the resource requests.

[Figure: service-request view — a forwarding graph over functions A–F annotated with KPI-1..KPI-4; resource-request view — the same functions mapped onto switches, a router and an SDN controller]
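A minimal sketch of this decomposition, assuming a hypothetical request schema (all field names are invented): the forwarding graph yields one compute and one placement request per function, and the KPI set is copied down into the per-link networking requests as parameters.

```python
def decompose(service_request):
    """Decompose a service request (graph + KPIs) into resource requests."""
    graph, kpis = service_request["graph"], service_request["kpis"]
    # One compute request per function in the forwarding graph.
    compute = [{"type": "compute", "vnf": v, "vcpus": 2} for v in graph]
    # One networking request per edge, parameterized by the service KPIs.
    networking = [{"type": "network", "link": (src, dst),
                   "max_latency_ms": kpis["latency_ms"]}
                  for src, dsts in graph.items() for dst in dsts]
    # One placement request per function.
    placement = [{"type": "placement", "vnf": v, "anti_affinity": True}
                 for v in graph]
    return compute + networking + placement

req = {"graph": {"A": ["B"], "B": []}, "kpis": {"latency_ms": 10}}
print(len(decompose(req)))  # 2 compute + 1 network + 2 placement = 5
```

The interesting design point is the KPI propagation: a service-level SLA becomes a constraint on each decomposed request, which is what later makes per-layer monitoring attributable to the service.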
Service Creation Model

Service Requests are characterized by KPIs / SLAs. The Service Orchestrator decomposes Service Requests into Resource Requests:
– Configuration requests
– Computing requests
– Placement requests

The decomposition is driven by the KPIs, and the resulting requests flow down through the layers:
– The SDN Controller receives networking / chaining requests and pushes networking configuration to network devices
– The Resource Manager receives computing requests and passes placement requests to the Scheduler / Placement Manager, which makes resource reservations through OpenStack services (Neutron, Nova, etc.)
– The VNF Manager instantiates VNFs and applies VNF configuration

Service performance (adherence to KPIs / SLAs) requires monitoring of individual Resource Requests & VNFs. Monitoring data acquired at each layer should be “aggregated” to determine the service performance.

[Figure: layered view — Service Orchestrator, Controllers, Infrastructure Managers — showing the request flow described above, including other networking devices]
End-to-End Service Monitoring in NFV: Service-Aware Monitor

The Service Monitoring Engine combines two feeds to monitor service chains:
– Application-specific monitoring data (application-specific parameters in a container)
– Infrastructure monitoring data (infrastructure statistics from an Infrastructure Monitor)

It is driven by the Service Orchestration Model and Implementation together with Service Templates.

Open question: should we standardize these interfaces?

[Figure: applications App-1..App-5 on Compute Node-1 and Compute Node-2 feeding the Infrastructure Monitor and the Service Monitoring Engine]
Service Aware Monitor

[Figure: architecture — per-function collectors (NF3, NF4, NF5 and an Advanced Services collector) gather data from monitoring agents (MA) and APIs across Networking, Storage, Advanced Services and Infrastructure Monitoring, publish it over a Message Queue into a Metrics Database, and a Monitoring Initiator & Aggregator feeds the Service Orchestrator and Controllers]

Outputs of the aggregator include:
– Service alarms
– KPI trend analysis
– Scaling decisions
– Resource reconfiguration
– Service self-healing
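The collector → message queue → aggregator pattern can be sketched with an in-process `queue.Queue` standing in for a real broker (e.g. RabbitMQ). Collector and metric names are illustrative only.

```python
import queue

bus = queue.Queue()  # stand-in for the message queue between layers

def collector(name, samples):
    """A per-network-function collector publishing samples onto the bus."""
    for s in samples:
        bus.put({"collector": name, **s})

def aggregator():
    """Drain the bus and compute one service-level figure per collector."""
    totals = {}
    while not bus.empty():
        msg = bus.get()
        totals.setdefault(msg["collector"], []).append(msg["cpu_pct"])
    return {c: sum(v) / len(v) for c, v in totals.items()}

collector("NF3", [{"cpu_pct": 30}, {"cpu_pct": 50}])
collector("NF4", [{"cpu_pct": 80}])
print(aggregator())  # {'NF3': 40.0, 'NF4': 80.0}
```

Decoupling collectors from the aggregator through a queue is what lets multi-vendor VNFs with different monitoring agents feed one service-level view.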
Service Orchestration & Monitoring: Key Conclusions
• Decomposition of Service Requests into Resource Requests
• VNF-state-aware monitoring initiation and collection
• Aggregation of monitoring data
• Possible standardization of monitoring initiation and collection by VNF vendors and the open-source community
• Service-aware placement
• Modularity in the Service Orchestrator architecture
Cloud-RAN Use Case
Cloud RAN: System View

[Figure: centralized eNB pool in the cloud — a host running RT Linux with KVM hosts, per eNB (eNB1, eNB2), two guest RT Linux VMs: VM1 runs PDCP, RLC and MAC; VM2 runs L3, OAM and RRM. LTE Layer-1 (baseband unit) processing and the antennas remain at the antenna sites]
Cloud RAN: OpenStack Orchestration

[Figure: two eNodeB VMs (each running RRM, OAM, eNodeB L3 and eNodeB L2) on separate compute nodes, connected to a vEPC. Each VM (guest OS Fedora on host OS Ubuntu) exposes S1-AP, GTP-U, X2-AP and eNB L2 interfaces — eNB1 on the 192.16.81.x network (.51–.58) and eNB2 on 172.16.114.x (.120–.124). Layer-1 processing and the antenna stay at the antenna site]
CPU Utilization

CPU load increases with throughput and the number of users scheduled
– Move non-real-time MAC scheduling out

Setup
– 1 vCPU, 2 GB memory VMs
– Intel Core i3 processor

Measured CPU utilization vs uplink throughput:

UL Throughput (Mbps):  10  20  30  40  50  60
CPU Utilization (%):   52  58  64  70  74  77
Cloud RAN OpenStack Orchestration: Key Observations

Packet buffering / loss: solved through PCI pass-through approaches
Live VM migration of the L2 virtual machine is a challenge, as the MAC scheduler requires 1 ms of processing time
Scaling: more eNBs are realized by launching more eNB VNFs
– If an eNB is mapped to 3 sectors, the L3 layer can be common
– If a single S1 interface is required for all eNBs, then scaling is limited
Scaling: CPU load increases with throughput and the number of users scheduled
– Move non-real-time MAC scheduling out
Community Support
Possible Community Actions: Switch-as-a-Service & Port Mirroring

Today the tenant has no visibility into the switch that connects its VMs (with OVS), whereas in the real world the user controls its own switch. Full control of its own switch would give possibilities like:
– Flexibility for the tenant to define service chains through an SDN controller
– Defining custom monitoring
– The same network visibility as a dedicated switch
– Security settings like MAC-based policies

Port mirroring is a key enabler for efficient NFV troubleshooting and monitoring. A tenant should be able to mirror a port to debug from the traffic exchanged between two VNFs. A Neutron API may need to be developed to let tenants initiate and terminate port mirroring.
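To make the proposal concrete, here is a sketch of what a tenant-facing port-mirror creation payload could look like. Neutron had no such API at the time, so the resource name, fields and port identifiers below are entirely hypothetical.

```python
import json

def build_mirror_request(source_port_id, dest_port_id, direction="both"):
    """Assemble a (hypothetical) port-mirror creation payload: copy the
    traffic seen on `source_port_id` to `dest_port_id` for debugging."""
    assert direction in ("ingress", "egress", "both")
    return json.dumps({
        "port_mirror": {                      # hypothetical resource name
            "source_port": source_port_id,
            "destination_port": dest_port_id, # where mirrored traffic lands
            "direction": direction,
        }
    })

body = build_mirror_request("vnf1-port", "debug-port")
print(json.loads(body)["port_mirror"]["direction"])  # both
```

The essential tenant-facing knobs are just the source port, a destination port attached to the tenant's debugging VM, and the traffic direction; a matching delete call would terminate the mirror.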
Possible Community Actions: SIP Load Balancing as a Service

IMS is now widely used by operators to provide packet-switched voice and other multimedia services to customers. SIP (Session Initiation Protocol) is used in IMS to set up and tear down real-time voice and video calls. Huge SIP traffic needs to be balanced across a group of IMS nodes to ensure the scalability and reliability of telco IMS services. To ensure a robust and scalable virtualized IMS network, SIP LBaaS would be a much-needed service in OpenStack.

Reference: SIP Load-Balancing-as-a-Service (LBaaS), David Waiting, https://etherpad.openstack.org/p/telcowg-usecase-SIP_LBaaS
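A core requirement that distinguishes SIP load balancing from plain HTTP LBaaS is dialog affinity: every message of one call must reach the same IMS node. A minimal sketch of one way to get that, hashing the (stable) Call-ID over the node pool; a real SIP LBaaS would additionally track dialog state and node health.

```python
import hashlib

def pick_node(call_id, nodes):
    """Deterministically map a SIP Call-ID to one IMS node, so that all
    messages of the same dialog land on the same node."""
    digest = hashlib.sha256(call_id.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

nodes = ["ims-1", "ims-2", "ims-3"]
# Every message carrying the same Call-ID is routed identically.
assert pick_node("a84b4c76e66710", nodes) == pick_node("a84b4c76e66710", nodes)
```

Plain modulo hashing remaps many dialogs when the pool changes size; consistent hashing is the usual refinement when IMS nodes scale in and out.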
Headquarters
303 Twin Dolphin Drive
6th Floor
Redwood City, CA 94065
USA
Tel: +1 650 632 4310
Fax: +1 650 551 9901
Thank you.