
SP Use Cases for NFV and vCPE

Enabling Service Agility via CSR1000V

Matt Falkner, Distinguished Engineer, Technical Marketing

BRKSPG-2519

Abstract

• Today's CPE provides a number of network functions such as firewall, access control, NAT, policy management and VPN. The CSR 1000V can help service providers cut down the cost of CPE deployments and reduce their maintenance overhead by implementing selected network functions in software that can run on a variety of industry-standard servers. The CSR 1000V also offers NFV functionality by leveraging Cisco's IOS XE, a proven and time-tested network OS already widely deployed in the field. The session covers the fundamentals of virtual IOS XE and its use cases for NFV and vCPE, focusing on virtual layer 3 to layer 7 features such as the virtual Broadband Remote Access Server (vBRAS) and the virtual Route Reflector (vRR).

Glossary

ACL: Access Control List
ANCP: Access Node Control Protocol
ARP: Address Resolution Protocol
ASA: Adaptive Security Appliance
AVC: Application Visibility & Control
BFD: Bidirectional Forwarding Detection
BPDU: Bridge Protocol Data Unit
BRAS: Broadband Remote Access Server
BSS: Business Support System
CAPEX: Capital Expenditures
CDP: Cisco Discovery Protocol
CE: Carrier Ethernet
CE: Customer Edge
CEF: Cisco Express Forwarding
CFM: Connectivity Fault Management
CFS: Completely Fair Scheduler
CFS: Customer Facing Services
CGN: Carrier Grade NAT
CLI: Command Line Interface
CM: Chassis Manager (in IOS XE)
CoA: RADIUS Change of Authorization
COS: Class of Service
COTS: Commercial off-the-shelf
CPS: Calls per Second
DC: Data Center
DCI: Data Center Interconnect
DHCP: Dynamic Host Configuration Protocol
DNS: Domain Name System
DPDK: Data Plane Development Kit
DPI: Deep Packet Inspection
DPM: Distributed Power Management
DRS: Dynamic Resource Scheduling
DSCP: DiffServ Code Point
EAP: Extensible Authentication Protocol
EOAM: Ethernet OAM
ESA: Email Security Appliance
ESC: Elastic Services Controller
ESXi: VMware hypervisor
EVC: Ethernet Virtual Circuit
F/D/C: Fibre / DSL / Cable
FFP: Fast Forwarding Plane (data plane in IOS XE)
FLR: Frame Loss Rate
FM: Forwarding Manager (in IOS XE)
FSOL: First Sign of Life
FT: Fault Tolerance
FW: Firewall
GRE: Generic Routing Encapsulation
GRT: Global Routing Table
GSO: Generic Segmentation Offload
GTM: Go-to-Market
HA: High Availability
HQF: Hierarchical Queueing Framework
HQoS: Hierarchical QoS
HSRP: Hot Standby Router Protocol
HT: Hyperthreading
HV: Hypervisor
I/O: Input / Output
IDS: Intrusion Detection System
IP SLA: IP Service Level Agreements
IPC: Inter-Process Communication
IPoE: IP over Ethernet
IPS: Intrusion Prevention System
IRQ: Interrupt Request
ISG: Intelligent Services Gateway
ISG TC: ISG Traffic Class
IWAN: Intelligent WAN (Cisco solution)
KSM: Kernel Same-page Merging
KVM: Kernel-based Virtual Machine
L2TPv2: Layer 2 Tunneling Protocol version 2
LAC: L2TP Access Concentrator
LAG: Link Aggregation
LB: Load Balancer
LCM: Life-Cycle Manager (for VNFs)
LNS: L2TP Network Server
LR: Loss Rate
LRO: Large Receive Offload
MC: PfR Master Controller
MP-BGP: Multiprotocol BGP
MPLS EXP: Multiprotocol Label Switching EXP field
MS/MR: LISP Map Server / Map Resolver
MSP: Managed Service Provider
MST: Multiple Spanning Tree
NAT: Network Address Translation
NB: Northbound
NE: Network Element
NF: NetFlow
NFV: Network Functions Virtualization
NFVI: NFV Infrastructure
NFVO: NFV Orchestrator
NIC: Network Interface Card
NID: Network Interface Device
NSO: Network Services Orchestrator
NUMA: Non-Uniform Memory Access
NVRAM: Non-Volatile Random Access Memory
OAM: Operations, Administration and Maintenance
OPEX: Operational Expenditures
OS: OpenStack
OSS: Operations Support System
OVS: Open vSwitch
PBHK: Port Bundle Host Key (ISG feature)
PE: Provider Edge
PF: Physical Function (in SR-IOV)
PfR: Performance Routing
PMD: Poll Mode Driver
pNIC: Physical NIC
PnP: Plug and Play
POF: Prime Order Fulfillment
PoP: Point of Presence
PPE: Packet Processing Engine
PPS: Packets per Second
PSC: Prime Services Catalog
PTA: PPP Termination and Aggregation
PW: Pseudowire
PxTR: Proxy Tunnel Router (LISP)
QFP: Quantum Flow Processor
QoS: Quality of Service
RA: Remote Access
REST: Representational State Transfer
RFS: Resource Facing Services
RR: Route Reflector
RSO: Receive Segmentation Offload
Rx: Receive
SB: Southbound
SBC: Session Border Controller
SC: Service Chaining
SDN: Software Defined Networking
SF: Service Function (in the SFC architecture)
SFC: Service Function Chaining
SFF: Service Function Forwarder (in the SFC architecture)
SGT: Security Group Tag
SIP: SPA Interface Processor
SLA: Service Level Agreement
SLB: Server Load Balancing
SMB: Small and Medium Business
SNMP: Simple Network Management Protocol
SP: Service Provider
SPA: Shared Port Adapter
SR-IOV: Single Root I/O Virtualization
TCO: Total Cost of Ownership
TOS: Type of Service
TPS: Transparent Page Sharing
TSO: TCP Segmentation Offload
TTM: Time-to-Market
UC: Unified Communications
vCPE: Virtual CPE
vCPU: Virtual CPU
VF: Virtual Function (in SR-IOV)
vHost: Virtual Host
VIM: Virtual Infrastructure Manager
VLAN: Virtual Local Area Network
VM: Virtual Machine
vMS: Virtual Managed Services
VNF: Virtual Network Function
VNFM: VNF Manager
vNIC: Virtual NIC
VPC: Virtual Private Cloud
vPE-F: Virtual PE Forwarding Instance
VPLS: Virtual Private LAN Service
VPN: Virtual Private Network
VRF: Virtual Routing and Forwarding
vSwitch: Virtual Switch
VTC: Virtual Topology Controller
VTF: Virtual Topology Forwarder
VTS: Virtual Topology System
WAAS: Wide Area Application Services
WAN: Wide Area Network
WLAN: Wireless LAN
WLC: Wireless LAN Controller
WRED: Weighted Random Early Detection
ZBFW: Zone-Based Firewall
ZTP: Zero Touch Provisioning

Agenda

• Introduction
• CSR 1000v System Architecture
• vCPE Network Architectures and the vMS Solution
• Virtualizing BRAS, LAC, LNS or Route Reflectors
• Conclusion

Introduction

Network Functions Virtualization (NFV)


Announced at SDN World Congress, Oct 2012

• AT&T

• BT

• CenturyLink

• China Mobile

• Colt

• Deutsche Telekom

• KDDI

• NTT

• Orange

• Telecom Italia

• Telstra

• Verizon

• Others TBA…

What is NfV? A Definition

"NFV decouples network functions such as NAT, firewall, DPI, IPS/IDS, WAAS, SBC, RR, etc. from proprietary hardware appliances so they can run in software. It utilizes standard IT virtualization technologies running on high-volume servers, switches and storage to virtualize network functions. It involves the implementation of network functions in software that can run on a range of industry-standard server hardware, and that can be moved to, or instantiated in, various locations in the network as required, without the need for installation of new equipment."

Sources:

https://www.sdncentral.com/which-is-better-sdn-or-nfv/

http://portal.etsi.org/nfv/nfv_white_paper.pdf

[Figure: service orchestration spanning SDN, NFV and x86 compute]

CSR 1000v System Architecture

Cisco CSR 1000V – Virtual IOS XE Networking

Single-tenant WAN Gateway

• Small Footprint, Low Performance

IOS XE Cloud Edition

• IOS XE features for Cloud and NfV Use Cases

Infrastructure Agnostic

• Server, Switch, Hypervisor

Rich Network Services

• Routing, VPN, App Visibility & Control, DC Interconnect, and more

Perpetual, Term, Usage-based Licenses

• Elastic Capacity (Throughput)

Programmability

• RESTful APIs for Automated Management

[Figure: the CSR 1000V runs as a VM alongside application VMs on a virtual switch and hypervisor on a server in a VPC/vDC]

Rapid Deployment and Flexibility
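The Programmability bullet above refers to the CSR 1000v's RESTful management API. As a hedged sketch (the port 55443, the token-service path and the JSON field name follow the CSR 1000V REST API documentation of this era and should be verified against your IOS XE release; the address and credentials are placeholders), a client session could look like this:

CSR=192.0.2.10

# Request an authentication token from the REST API (HTTP basic auth)
TOKEN=$(curl -k -s -u admin:admin -X POST \
  "https://${CSR}:55443/api/v1/auth/token-services" | \
  python -c 'import sys, json; print(json.load(sys.stdin)["token-id"])')

# Use the token to read a resource, e.g. the configured host name
curl -k -s -H "X-Auth-Token: ${TOKEN}" "https://${CSR}:55443/api/v1/global/host-name"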

Architecture (CSR 1000v): Virtualized IOS XE

• Generalized to work on any x86 system
• Hardware specifics are abstracted through a virtualization layer
• Control plane and data plane are mapped to vCPUs
• Bootflash: and NVRAM: are mapped into memory from the hard disk
• No dedicated crypto engine; the Intel AES-NI instruction set provides hardware crypto assist
• Boot loader functions are implemented by GRUB

Packet path within CSR 1000v

1. Ethernet driver (ingress)

2. Rx thread

3. PPE Thread (packet processing)

4. HQF Thread (egress queuing)

5. Ethernet driver (egress)

[Figure: virtualized IOS XE architecture. The control plane (IOS, Chassis Manager, Forwarding Manager) and the forwarding plane (Chassis Manager, Forwarding Manager, FFP client/driver, FFP code) run in a Linux container on vCPU, vMemory, vDisk and vNIC resources presented by the hypervisor (VMware / Citrix / KVM / Microsoft) on top of the physical CPU, memory, disk and NIC; control and data messaging connect the two planes.]

IOS

• Runs as a process under the guest Linux kernel; IOS timing is governed by Linux kernel scheduling
• Provides virtualized management ports, which are managed by their respective software processes
• No direct hardware component access!
• Communicates with other software processes via IPC
• Runs control plane features:
• CLI and configuration processing
• SNMP handling
• Running routing protocols and computing routes
• Managing interfaces and tunnels
• Session management
• Processing of punted features (legacy protocols)


REFERENCE

Chassis Manager (CM)

• The CM in the control plane communicates with its peer CM process in the forwarding plane (distributed function)
• Initializes hardware components and boots various other processes
• Sets up a virtual chassis file system
• Enables IOS
• Initializes bootstrapping
• Simulates a SPA/SIP slot for interface discovery and maps virtual interfaces into IOS
• Communicates with IOS to make it aware of the hardware components
• Monitors environmental variables and alarms


REFERENCE

Forwarding Manager (FM)

• The FM in the control plane communicates with its peer FM in the forwarding plane (distributed control function)
• Propagates control plane operations to the forwarding plane
• Exports forwarding information (CEF tables, ACLs, NAT, …) from the control plane to the forwarding plane
• Maintains its own copy of the forwarding state tables
• Communicates state information (e.g. statistics) back to the control plane


REFERENCE

FFP Client/Driver and μcode

Fast Forwarding Processor (FFP) client
• Allocates and manages resources on the forwarding plane (data structures, memory, scheduling hierarchy)
• Receives requests from IOS via the control plane
• Re-initializes the FFP (QFP) and its memory if a software error occurs

FFP driver
• Provides low-level access and control to the FFP (register access)
• Provides the communication path between the FFP client and the FFP via IPC

FFP microcode (μcode)
• Implements the data plane
• A Feature Invocation Array determines feature ordering


REFERENCE

CSR 1000v Feature Support and Technology Packages

IPBase
• Basic Networking: BGP, OSPF, EIGRP, RIP, ISIS, IPv6, GRE, VRF-LITE, NTP, QoS
• High Availability: HSRP, VRRP, GLBP
• Addressing: 802.1Q VLAN, EVC, NAT, DHCP, DNS
• Basic Security: ACL, AAA, RADIUS, TACACS+
• Management: IOS-XE CLI, SSH, Flexible NetFlow, SNMP, EEM, NETCONF

SEC (IPBase plus…)
• Multicast: IGMP, PIM
• Advanced Security: Zone-Based Firewall, IPSec VPN, EZVPN, DMVPN, FlexVPN

AppX (IPBase plus…)
• Advanced Networking: L2TPv3, BFD, MPLS, VRF, VXLAN
• Application Experience: WCCPv2, AppNav, NBAR2, AVC, IP SLA
• Hybrid Cloud Connectivity: LISP, OTV, VPLS, EoMPLS

AX: all of the above features

REFERENCE

CSR 1000v IOS XE Threads to vCPU Associations

• IOS XE processing threads in the guest OS are statically mapped to vCPU threads
• vCPU threads in turn are allocated to physical cores by the hypervisor scheduler

• 1 vCPU footprint: control plane and data plane share vCPU 0
• 2 vCPU footprint: control plane on vCPU 0, data plane on vCPU 1
• 4 vCPU footprint: control plane on vCPU 0, data plane PPE on vCPUs 1 & 2, HQF on vCPU 3
• 8 vCPU footprint: control plane on vCPU 0, data plane PPE on vCPUs 1-5, HQF on vCPU 6, Rx processing on vCPU 7

NOTE: vCPU allocations are subject to change without further notice
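Since the hypervisor scheduler decides where these vCPU threads actually run, it can help to inspect and optionally pin them. A minimal sketch on KVM/libvirt (the domain name csr1000v and the core numbers are placeholders):

# Show the current vCPU-to-physical-core placement of the CSR VM
virsh vcpuinfo csr1000v

# Pin the control-plane vCPU and a data-plane vCPU to dedicated cores
virsh vcpupin csr1000v 0 2     # vCPU 0 (control plane) -> core 2
virsh vcpupin csr1000v 1 3     # vCPU 1 (data plane)    -> core 3

# Add --config to persist the pinning in the domain XML across restarts
virsh vcpupin csr1000v --config 0 2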

CSR 1000V Performance-to-Footprint in IOS-XE 3.14

• For each throughput/technology-package combination, the minimum required vCPU and RAM is listed

• Performance results based on 1500 Byte packets and VMWare ESXi

Throughput | IPBase | SEC | AppX | AX
10 Mbps | 1 vCPU / 4 GB | 1 vCPU / 4 GB | 1 vCPU / 4 GB | 1 vCPU / 4 GB
50 Mbps | 1 vCPU / 4 GB | 1 vCPU / 4 GB | 1 vCPU / 4 GB | 1 vCPU / 4 GB
100 Mbps | 1 vCPU / 4 GB | 1 vCPU / 4 GB | 1 vCPU / 4 GB | 1 vCPU / 4 GB
250 Mbps | 1 vCPU / 4 GB | 1 vCPU / 4 GB | 1 vCPU / 4 GB | 1 vCPU / 4 GB
500 Mbps | 1 vCPU / 4 GB | 1 vCPU / 4 GB | 1 vCPU / 4 GB | 1 vCPU / 4 GB
1 Gbps | 1 vCPU / 4 GB | 1 vCPU / 4 GB | 1 vCPU / 4 GB | 2 vCPU / 4 GB
2.5 Gbps | 1 vCPU / 4 GB | 1 vCPU / 4 GB | 4 vCPU / 4 GB | 4 vCPU / 4 GB
5 Gbps | 1 vCPU / 4 GB | 2 vCPU / 4 GB | 8 vCPU / 4 GB | NA
10 Gbps | 2 vCPU / 4 GB | NA | NA | NA

License Management Overview

• With IOS XE 3.13, the CSR 1000v package names are now IPBase, Security, AppX and AX
• The 'license boot level' command has been adjusted accordingly
• The old CLI keywords ('premium | advanced | standard') are hidden but still accepted
• Smart Licensing
• Evaluation licenses can be generated for 60 days using the demo portal (www.cisco.com/go/license); they require the UDI
• Two evaluation licenses are available: 50 Mbps for AX and 500 Mbps for IPBase
• After the evaluation period expires, throughput is throttled to 100 Kbps
• See http://www.cisco.com/c/en/us/td/docs/routers/csr1000/software/configuration/csr1000Vswcfg/licensing.html for license management details

[Figure: license structure. The IPBase, Security, AppX and AX technology packages are available as 1-year, 3-year or perpetual licenses; broadband (BB), CGN, 4G and memory (MEM) add-on licenses are perpetual only.]
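A hedged sketch of the CLI flow described above (package keywords and evaluation behaviour vary by release; verify with 'license boot level ?' on your image):

! Select the AX technology package; a reload activates the new boot level
configure terminal
 license boot level ax
 end
write memory
reload

! Verify the active package and throughput license after boot
show version | include License
show license all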

Virtualization and Hypervisor Interactions

[Figure: a UCS blade with physical CPUs (cores), memory and physical interfaces; the hypervisor scheduler places the vCPUs, vMemory and vNICs of multiple CSR VMs onto physical cores, memory, forwarding tables and vSwitch ports.]

• Hypervisor abstracts and shares physical hardware resources from / among multiple VMs

• Scheduling of vCPUs onto physical cores can create non-deterministic behavior

• Scheduling of vNICs onto physical ports can lead to packet losses / jitter

• Multiple VMWare settings control resource allocations, e.g.

Number of vCPUs per VM

Min cycles per vCPU / pinning

vSwitch loadbalancing settings

CSR 1000v and Hypervisor Processing Relationships

• Example: 3 CSR VMs scheduled on a 2-socket, 8-core x86 host (different CSR footprints shown)
• Type 1 hypervisor (no additional host OS represented)
• The HV scheduler algorithm governs how vCPU / IRQ / vNIC / VMkernel processes are allocated to pCPUs
• Note the various schedulers, running ships-in-the-night

[Figure: hypervisor scheduling example. Three CSR 1000v VMs (4 vCPU, 1 vCPU and 2 vCPU footprints) run on a 2-socket, 8-core x86 host. Inside each VM a guest OS scheduler places the IOS, FMan/CMan, PPE, HQF and Rx threads onto the VM's vCPUs; the HV scheduler and packet scheduler map vCPUs, IRQs, vNICs and VMkernel processes onto pCPUs and the vSwitch via per-pCPU process queues.]

KVM + Ubuntu Architecture Overview

[Figure: on an x86 machine, the host OS with KVM runs a vSwitch (OVS) or Linux bridge on top of the NIC driver; each guest OS attaches through a virtio-net driver, a QEMU / vhost thread and a tap interface.]

• KVM + Ubuntu is gaining traction as a hypervisor (open source)
• The hypervisor virtualizes the NIC hardware towards the multiple VMs
• The hypervisor scheduler is responsible for ensuring that I/O processes are served
• One vhost/virtio thread is used per configured interface (vNIC); this may become a bottleneck at high data rates

VMware Performance Recommendations

• Use a Cisco UCS VM-FEX NIC (or Intel equivalent)
• Use a direct-path I/O technology (SR-IOV with PCIe pass-through)
• Apply CPU tuning!
• Do not oversubscribe the physical NIC
• Use the latest processor generation to benefit from virtualization enhancements
• Follow VMware networking best practices, in particular the latency sensitivity feature: http://pubs.vmware.com/vsphere-55/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-55-networking-guide.pdf
• Hardware BIOS settings
• Virtual interrupt coalescing
• Disabling hyperthreading
• RX/TX buffer tuning
• Disabling pause frames
• Interrupt throttling
• Disabling IPv6
• Any other recommendations under http://www.vmware.com/files/pdf/techpaper/latency-sensitive-perf-vsphere55.pdf
• Disable TPS in ESXi
• TRADEOFF between VM scale and VM performance

REFERENCE
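A hedged sketch of two of these recommendations from the ESXi shell (the TPS salting option follows VMware KB 2097593; vmnic1 and the ring sizes are placeholders, and ring tuning via the bundled ethtool applies to older ESXi builds, so verify against your vSphere release):

# Disable inter-VM transparent page sharing (TPS) by enforcing per-VM salting
esxcli system settings advanced set -o /Mem/ShareForceSalting -i 2

# Enlarge the RX/TX ring buffers on the physical uplink carrying CSR traffic
ethtool -G vmnic1 rx 4096 tx 4096

# Latency sensitivity, interrupt coalescing and hyperthreading are configured per VM
# (vSphere: Edit Settings -> Latency Sensitivity) and in the server BIOS, not from this shell.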

KVM Performance Tuning Recommendations

• Use a direct-path I/O technology (SR-IOV with PCIe pass-through) together with the CPU tuning below. Otherwise, the following configurations are recommended:

CPU tuning
• Disable hyperthreading (can be done in the BIOS)
• Find the I/O NUMA node: cat /sys/bus/pci/devices/0000:06:00.0/numa_node
• Enable isolcpus; check the topology with "numactl -H"
• Pin vCPUs: sudo virsh vcpupin test 0 6
• Set the CPU into performance mode: /etc/init.d/ondemand stop
• Set the processor into pass-through mode: virsh edit <vm name> and add the line <cpu mode='host-passthrough'/>
• Disable IRQ balancing: service irqbalance stop
• Make the VM NUMA-aware: virsh edit <VM name> and set <vcpu placement='static' cpuset='8-15'>1</vcpu>
• IRQ pinning: find the NIC interrupt number in /proc/interrupts and set its affinity to a core other than those used for vCPU and vhost pinning

REFERENCE

KVM Performance Tuning Recommendations (cont.)

I/O tuning
• Pin vhost processes: sudo taskset -pc 4 <process number>, where <process number> is found using ps -ef | grep vhost
• Change the vnet TX queue length to 4000 (default is 500): sudo ifconfig vnet1 txqueuelen 4000
• Turn off TSO, GSO and GRO: ethtool -K vnet1 tso off gso off gro off
• Disable KSM: echo 0 > /sys/kernel/mm/ksm/run

NOTE: these settings may impact the number of VMs that can be instantiated on a server / blade

REFERENCE
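The tunings from the two tables above can be collected into a single host-side script. The sketch below simply strings the documented commands together; the VM name, PCI address, interface name and core numbers are placeholders to adapt per host:

#!/bin/sh
# KVM host tuning for one CSR 1000v VM (run as root; adapt names and cores first)
VM=csr1000v

# CPU: stop the on-demand CPU governor and IRQ balancing
/etc/init.d/ondemand stop
service irqbalance stop

# CPU: identify the NUMA node owning the NIC, then pin vCPUs to cores on that node
cat /sys/bus/pci/devices/0000:06:00.0/numa_node
virsh vcpupin "$VM" 0 6
virsh vcpupin "$VM" 1 7

# I/O: pin the vhost thread serving the vNIC to its own core
VHOST_PID=$(pgrep -f vhost | head -1)
taskset -pc 4 "$VHOST_PID"

# I/O: enlarge the vnet TX queue and disable segmentation offloads
ifconfig vnet1 txqueuelen 4000
ethtool -K vnet1 tso off gso off gro off

# Memory: disable kernel same-page merging (KSM)
echo 0 > /sys/kernel/mm/ksm/run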

I/O Optimizations: Direct-map PCI (PCI pass-through)

[Figure: on an x86 host running KVM, physical NICs are mapped directly into individual guest OSes, each guest using its own NIC driver and bypassing the host vSwitch.]

• Physical NICs are directly mapped to a VM
• Bypasses the hypervisor scheduler layer
• The PCI device (i.e. the NIC) is no longer shared among VMs
• Typically, all ports on the NIC are associated with the VM, unless the NIC supports virtualization
• Caveats:
• Limits the number of VMs per blade to the number of physical NICs per system
• Breaks live migration of VMs


I/O Optimizations: Single Root I/O Virtualization (SR-IOV) with PCIe Pass-through

• Allows a single PCIe device to appear as multiple separate PCIe devices (the NIC supports virtualization)
• Enables network traffic to bypass the software switch layers
• Creates physical and virtual functions (PF/VF)
• PF: full-featured PCIe function
• VF: PCIe function without configuration resources
• Each PF/VF gets its own PCIe requester ID, so that I/O memory management can be separated between different VFs
• The number of VFs depends on the NIC (on the order of 10)
• Ports with the same (e.g. VLAN) encapsulation share the same L2 broadcast domain
• Requires support in the BIOS and hypervisor

[Figure: on an x86 host running KVM, the SR-IOV NIC exposes a PF (managed by the SR-IOV master driver in the host) and several VFs; a layer-2 sorter / switch / classifier on the NIC steers traffic to each VF, which is mapped directly into a guest OS via a VF driver.]
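A hedged sketch of carving out VFs on a Linux/KVM host (interface name, VF count and PCI address are placeholders; the NIC, BIOS and hypervisor must support SR-IOV as noted above):

# Create 4 virtual functions on the physical function enp6s0f0
echo 4 > /sys/class/net/enp6s0f0/device/sriov_numvfs

# The VFs appear as their own PCIe devices with separate requester IDs
lspci | grep -i "virtual function"

# A VF can then be handed to the CSR 1000v VM as a PCI pass-through device,
# e.g. with virt-install: --hostdev pci_0000_06_10_0   (address is a placeholder)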

I/O through the Hypervisor vs. VM-FEX with DirectPath

I/O through the hypervisor
• The hypervisor virtualizes the NIC hardware towards the multiple VMs
• A single instance of the physical NIC hardware, including its queues, is shared
• Many-to-one relationship between the VMs' vNICs and the single physical NIC
• Possible performance bottleneck at high data rates

VM-FEX / VIC 1280 with DirectPath
• The VM-FEX / VIC 1280 / DirectPath card has hardware resources (queues on the ASIC) dedicated to each VM
• The hypervisor's virtualization layer and vSwitch are bypassed
• One-to-one relationship between the VM and the hardware resources
• Eliminates contention between VMs for access to the virtualized physical NIC
• Significantly higher throughput and lower latency

I/O Optimizations: DirectPath I/O for ESXi with VM-FEX

• DirectPath I/O allows VMs to directly access hardware devices (e.g. NICs)
• No longer requires an emulated NIC (E1000) or a para-virtualized NIC (VMXNET3)
• Can achieve higher throughput at lower CPU cycles
• Can use hardware features of certain physical NICs
• Caveats: does not support all features
• Physical NIC sharing
• Memory overcommit
• vMotion
• Recommended for applications with high packet rates

Feature | DirectPath I/O | DirectPath I/O with UCS VM-FEX
Suspend & Resume | ✖ | ✔
Record & Replay | ✖ | tbd
Fault Tolerance | ✖ | tbd
High Availability | ✖ | ✔
DRS | ✖ | ✔
Snapshots | ✖ | ✔
Hot add/remove of VM | ✖ | ✔
vMotion | ✖ | ✔

Use Case: Cloud CE/PE Router

[Figure: the CSR 1000V deployed in the VPC/vDC either as a vCE or as a vPE, connecting server segments A and B across the DC fabric to the WAN PE router over MPLS. As a vPE it carries tenant traffic over IPoVLAN, IPoIP, MPLSoVLAN or MPLSoIP encapsulations (IP = GRE, VXLAN, etc.) and peers with the PE via MP-BGP, increasing tenant scale.]

Benefits

• More Tenants per Physical Infrastructure

• End-to-end Managed Connectivity and SLAs

Challenges

• Mapping tenant traffic from VRFs to VLANs

• Maximum 4,096 VLANs limits scalability

Use Case: Secure VPN Gateway

• Benefit: scalable, dynamic, and consistent connectivity with the cloud

[Figure: CSR 1000V instances in the cloud provider's data center (VPC/vDC) terminate VPN tunnels over the public WAN from ISR branch routers and an ASR DC router providing network services.]

Challenges
• Inconsistent Security
• High Network Latency
• Limited Scalability

Benefits
• Direct, Secure Access
• Scalable, Reliable VPN
• Operational Simplicity

Solutions
• IPSec VPN, DMVPN, EZVPN, FlexVPN
• Routing and Addressing
• Firewall, ACLs, AAA
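To make the gateway role concrete, here is a minimal, hedged IOS XE sketch of a single IPsec-protected GRE tunnel terminating on the CSR 1000v (addresses, key and names are placeholders; a production deployment would more likely use DMVPN or FlexVPN as listed under Solutions above):

! IKEv2 authentication material for one branch peer
crypto ikev2 keyring BRANCH-KEYS
 peer BRANCH1
  address 198.51.100.10
  pre-shared-key ExampleKey123
!
crypto ikev2 profile IKEV2-PROF
 match identity remote address 198.51.100.10 255.255.255.255
 authentication remote pre-share
 authentication local pre-share
 keyring local BRANCH-KEYS
!
crypto ipsec profile VPN-PROF
 set ikev2-profile IKEV2-PROF
!
! GRE tunnel to the branch, protected by the IPsec profile
interface Tunnel100
 ip address 10.255.0.1 255.255.255.252
 tunnel source GigabitEthernet1
 tunnel destination 198.51.100.10
 tunnel protection ipsec profile VPN-PROF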

Use Case: Traffic Control and Management

• Benefit: comprehensive networking services gateway in the cloud

[Figure: CSR 1000V instances with vWAAS and HSRP in the cloud provider's data center (VPC/vDC) provide an optimized TCP connection towards WAAS-enabled ISR branch routers and an ASR DC router providing network services.]

Challenges
• Response Time of Apps
• Resource Guarantees
• Resilient Connectivity

Benefits
• Rich Portfolio of Network Features and Services
• Single Point of Control

Solutions
• AppNav for WAAS
• QoS Prioritization
• HSRP VPN Failover

vCPE Network Architectures and the vMS Solution

Managed CPE Extended Deployment Models

On-premise appliances / integrated services
• Router: routing, ACL, NAT, SNMP, …
• Switch: port aggregation
• Services realized with appliances (e.g. CUBE, WAAS, FW, UC)
• Full redundancy
• Could be multi-vendor (best of breed)
• Access via Fibre / DSL / Copper (F/D/C)

L3 or L2 private-cloud branch
• The L3 router remains in the branch but performs minimal functions
• L4-7 services are virtualized in the private cloud
• The branch router is tightly coupled with the virtual router in the private cloud for services (routing, QoS, FW, NAT, …)

(v)Router + virtualized L4-7 services
• Router: routing, ACL, NAT, SNMP
• Services virtualized on UCS-E: FW, WAAS, …
• Could be multi-vendor (best of breed)
• The router could be virtualized too

Why Move Services into the SP Network?

• Reduce costs and consolidate by virtualizing services
• Simple, stateless branch hardware: ship it, plug it in, done!
• Eliminate equipment silos at each site
• Increase managed network functionality while reducing per-site costs
• Evolve/upgrade managed service offerings without changing CPE devices
• "Slim" cloud CPE hardware portfolio to fit branch locations
• Unified management spanning all branches

• Not a replacement for the entire CPE portfolio, but rather a complementary solution (for 'vanilla' services)

vCPE Creates Four New Revenue Levers

Lever 1: Expand the customer base
• Faster TTM enables more efficient use of resources
• The SP can reach out and close more deals with existing resources
• Expands the existing (CPE) customer base with new cloud CPE customers

Lever 2: Capture the SMB market
• SMBs need a different value proposition and GTM than enterprises
• Cloud CPE enables a better SMB value proposition and a more effective GTM
• Opens the SMB market segment in addition to the current enterprise market segment

Lever 3: Reduce churn
• Cloud CPE improves the service experience
• Less downtime, faster issue resolution, etc.
• Happy customers are less likely to churn

Lever 4: Increase ARPU
• With cloud CPE, services are delivered and managed centrally
• It is easier for customers to order new services (layering new services)

vCPE Architecture Building Blocks

vCPE
• Performs some or all of the L3 functions previously executed by an on-premise physical CPE
• Location: either in an SP PoP or in a data center
• Can be run in single-tenant or multi-tenant mode
• The provider edge router either switches the VLAN locally or tunnels the VLAN to the DC

CPE-Lite
• In either L2 or L3 mode
• Minimal functions to reduce operational complexity
• The SP aggregation network is assumed to be Carrier Ethernet
• Transparently transports Ethernet frames to the PE

NOTE: CPE-lite and vCPE are tightly coupled through a tunnel
• The CPE-lite does not selectively forward only subsets of flows to the vCPE
• This is the main difference to the cloud connector / NfV architecture

[Figure: branches with MSE/CPE-lite devices connect over the SP aggregation and core to vCPE VMs hosted either in an SP PoP or in a data center]

Pro’s

• Standardized x86 hardware to deliver add-on services

• Continued (& extended?) use of installed on-premise CPE

• Faster time to deploy new services• Once physical layer provisioned, can easily add/remove/change

services

• Provisioning automation / Programmability

• Service flexibility• De-coupling of networking functions from hardware

• New service opportunities with Service chaining

• Better resource granularity

• Can introduce IPv6 services faster even if current on-premise device does not support IPV6

Con’s

• Operation of two L3 functions in case of L3-based CPE-lite

• L2-based CPE-lite has architectural challenges

• Potentially increased latency & bandwith

• Longer traffic paths for on-premise traffic

• Premise-to-cloud-to-destination

• Latency from hypervisors

• System Integration Effort

• Complexity of service chaining

Cloud-based off-premise CPE: Technical Pro’s and Con’s

vManaged Services Use Cases

[Figure: a spectrum of managed-service models, from network functions on the CPE to virtual network functions from the cloud: a thick CPE ("classic" L3 CPE plus x86 on premise), a thin CPE (L3 CPE, cloud managed, or a simple L3 CPE down to an L2 NID), Meraki (cloud-managed CPE) and vMeraki (vMeraki on x86 on premise).]

vCPE L3-based CPE-Lite Architecture

• The MSP offers single or multiple FE ports, including integrated local switching
• The CPE-lite is tightly coupled with the vCPE via an IP tunnel (e.g. GRE, L2TPv2)
• Uplinks are redundant n x FE or GE
• L3 CPE-lite connectivity to the SP infrastructure is purely based on Gigabit Ethernet
• The L3 CPE-lite offers:
• Connectivity
• IP manageability (TACACS+, AAA, OAM)
• (H)QoS
• Optional IPSec encryption
• Some basic uplink HA
• Optional NetFlow
• No routing or services (NAT, firewall, IP SLA, NetFlow, …)

[Figure: the customer premise connects at L3 through the Ethernet aggregation network and SP core to the vCPE, which provides routing, QoS, firewall, NAT, etc.]

Single-tenant vCPE + L3 CPE-lite Protocol Stack

[Figure: protocol stack from a C800 CPE-lite across the Carrier Ethernet aggregation (802.1Q / QinQ over the ASR 9000 PE) and the L2 DC underlay (Ethernet/VLAN) into the UCS-hosted vCPE. The CPE-lite and vCPE are coupled by an IP tunnel (e.g. IPSec, GRE, L2TPv2) that rides over the .1Q/QinQ transport and terminates in a VRF on the vCPE.]

Roles in this architecture:
• CPE-lite: local routing, encryption, minimal configuration
• Ethernet transport network: MPLS-TP, QinQ imposition
• PE: could route the customer VLAN to the vCPE, in which case the vCPE architecture becomes a vCE architecture and CPE-lite and vCPE are then NOT tightly coupled; alternatively the PE terminates customer VLANs into a VRF or the GRT
• DC underlay: Ethernet or L3 based, QinQ imposition (the same DC underlay in both cases)
• vCPE: provides additional services and terminates the IP / L2TPv2 tunnel into a VRF

REFERENCE
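As a hedged illustration of the tight coupling described above, the sketch below shows a plain GRE variant of the CPE-lite-to-vCPE tunnel terminating in a per-customer VRF on the CSR 1000v (VRF name, RD and addresses are placeholders; the same design could use IPSec or L2TPv2 instead):

! vCPE (CSR 1000v) side
vrf definition CUST-A
 rd 65000:100
 address-family ipv4
 exit-address-family
!
interface Tunnel201
 vrf forwarding CUST-A
 ip address 172.16.201.1 255.255.255.252
 tunnel source GigabitEthernet1
 tunnel destination 203.0.113.21      ! CPE-lite WAN address (placeholder)
!
! CPE-lite side: the mirror-image Tunnel201, with a default route into the tunnel
! ip route 0.0.0.0 0.0.0.0 Tunnel201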

vCPE L2-NID Architecture

• The MSP offers an FE/GE port as a demarcation point to multiple customers (e.g. in a basement)
• Uplinks are FE or GE
• NID connectivity to the SP infrastructure is purely based on Gigabit Ethernet
• All traffic is transparently sent to the SP infrastructure / vCPE
• The NID offers the following feature set:
• Connectivity
• L2 security (L2 ACL, storm control, BPDU guard)
• IP manageability (TACACS+, AAA, OAM)
• COS
• No routing, services (NAT, firewall, IP SLA, NetFlow, …) or L3 HA

[Figure: the customer premise connects at L2 through the Ethernet aggregation network and SP core to the vCPE, which provides routing, QoS, firewall, NAT, etc.]

Single-tenant vCPE + L2 CPE-lite Protocol Stack

[Figure: protocol stack from an ME1200 NID across the Carrier Ethernet aggregation (802.1Q / QinQ over the ASR 9000 PE) and the L2 DC underlay (Ethernet/VLAN) into the UCS-hosted vCPE. The NID forwards customer frames transparently (VLAN or plain Ethernet encapsulation at the UNI); the vCPE is the first L3 hop and terminates the customer VLAN into a VRF.]

Roles in this architecture:
• CPE-lite (NID): either VLAN or plain Ethernet encapsulation at the UNI
• Ethernet transport network: MPLS-TP, QinQ imposition
• PE: decapsulates the QinQ (e.g. EVC) and re-encapsulates the customer VLAN according to the DC underlay; could also be the last Ethernet aggregation switch
• DC underlay: Ethernet or L3 based, QinQ imposition
• vCPE: the first L3 hop, connected 'on-a-stick', terminating customer VLANs into a VRF or the GRT over the same DC underlay

Reference E2E Functional Architecture for vMS/vCPE

[Figure: end-to-end vMS/vCPE functional architecture. A self-service portal with Customer Facing Services (CFS) and Resource Facing Services (RFS) sits on top of the Network Services Orchestrator; ESC provides service configuration and, together with an overlay SDN controller, VNF management and service chaining on x86/UCS behind the PE/DCI; Prime provides Metro/WAN management of the L2/L3 CPE (ISR, NID, Meraki MX) across the WAN/Internet. Functions span Day 0 bootstrap, Day 1/Day 2 configuration, stats collection (network and applications), fault management, demand placement, service assurance and analytics.]

• Customer Facing Services provide portal access to catalog offerings, including vCPE
• Virtual Network Functions provide CloudVPN and other NFVaaS (future: provisioning of the SP Metro/VPN)
• WAN optimization (WAE)
• Operations management and service assurance

(See also session DevNet-1020.)

End-to-End Service Orchestration

[Figure: ETSI NFV Management and Orchestration (MANO) view. The BSS/OSS and EMSs sit above VNF1-VNF3, which run on the NFV infrastructure (NFVI); MANO comprises the orchestrator, the VNF manager(s) (ESC as VNFM) and the virtualized infrastructure manager(s). The Cisco Network Services Orchestrator enabled by Tail-f combines VM life-cycle management and service activation across both virtual and physical elements (VMware and 3rd-party SDN); this hybrid approach enables migration.]

Cisco Orchestration Architecture

[Figure: the Network Services Orchestrator (based on Tail-f NCS) acts as the NFV orchestrator, holding the service catalog and the service, VNF and infrastructure descriptions, and integrating with the SP's existing OSS/catalog. It drives service provisioning and service lifecycle management through the Cisco VNF Manager (Elastic Services Controller) or a 3rd-party VNFM via REST APIs, and through the virtual infrastructure managers (VIMs): OpenStack and APIC for compute and storage, and the Cisco Virtual Topology Controller with VTF for the network. The VNF library (sample list) includes CSR 1000v, ASAv, QvPC-SI, QvPC-DI and 3rd-party VNFs running on the NFV infrastructure (NFVI).]

From Complexity to Simplicity and Automation

• Manual: architect it, design it, decide where we can put it, procure it, install it, configure it, secure it, check whether it is ready
• Automated, self-service, on-demand: service-oriented self-service, automated provisioning, elasticity (capacity on demand)

FROM WEEKS TO MINUTES*

Tail-f NCS Overview

[Figure: network engineers and management applications reach NSO through a network-wide CLI, Web UI, REST, NETCONF, Java and other interfaces. The Service Manager uses service models and the Device Manager uses device models; Network Element Drivers talk to the devices via NETCONF, CLI, SNMP, REST, etc.]

• Logically centralized network services

• Data models for data structures

• Structured representations of:

Service instances

Network configuration and state

• Mapping service operations to network configuration changes

• Transactional integrity

• Multiprotocol support

• Multivendor support

Network Services Orchestration (enabled by Tail-f)
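A hedged sketch of the operator-facing side of this, using NSO's Cisco-style network-wide CLI (sync-from and commit dry-run are standard NSO commands; the 'l3vpn' service path is a placeholder that assumes such a service model has been loaded):

ncs_cli -C -u admin

# Pull the current device configurations into NSO's configuration database
devices sync-from

# Instantiate a service from a loaded service model and preview the device changes
config
services l3vpn CUST-A endpoint ce0 interface GigabitEthernet0/1
commit dry-run outformat native
commit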

Motivation for ESC

1. Provisioning
2. Who configures the VM?
3. Who monitors the VM? Who alerts the system when the VM is not responding? Who will scale up the VM when it is overloaded?
4. Who monitors the application inside the VM?
5. Who alerts the system when the VM is ready for traffic?
6. Who will restart the VM if it fails?
7. Who is keeping logs of all events?
8. Who is keeping track of performance history?

Cisco ESC: VNF Lifecycle Management, Monitoring and Elasticity

[Figure: the Elastic Services Controller (ESC) drives the VNF lifecycle: onboard, deploy, monitor, scale, healing / fault-recovery, update, undeploy.]

• VNF management functions include:
• Agentless VNF management (any vendor, any application, any VNF)
• VNF lifecycle management (create, read, delete)
• VNF Day 0 configurations
• VM and service monitoring
• VNF auto-healing and recovery
• Service elasticity
• VNF license management
• Multi-VIM infrastructure support
• End-to-end customization support for VNF operations
• Transaction resume and rollback
• Coupled VNF management (VM affinity, startup order, management of VM interdependency)
• Service advertisement

ESC Differentiation

• Open and modular VNFM
• Out-of-the-box support for new and 3rd-party VNFs
• Agentless VM management and monitoring
• Customization across different levels of lifecycle management: service advertisement, monitoring, scaling actions
• Intelligent rules-based actions
• Simple and complex rules
• Works at the single-VM or coupled-VM level
• Transaction-level visibility, roll-back and resume operations
• Out-of-the-box support for both VM- and service-level monitoring

Key attributes: open, customizable, agentless, resume/rollback, complex actions, coupled VM management.

Elastic Services Controller

[Figure: ESC lifecycle state machine. VNF provisioning creates the VM (VM / service bootstrap process), after which the VM is alive, the service is alive and finally the service is functional. VNF monitoring watches for VM DEAD, Service DEAD and VM or service overloaded/underloaded conditions; an analytic engine and a rule engine then trigger either predefined actions or custom script actions, including VNF re-configuration, on the KVM-hosted service VMs.]

ESC Workflow

[Figure: ESC receives an XML <service-request> with a service name and request ID, deploys the service VMs on OpenStack (KVM hypervisor on a Linux host OS) and monitors them; failed VMs are replaced from a standby VM queue, with a load balancer, BGP and ganglia-based monitoring supporting the deployed service.]

VTS Architecture

[Figure: the Virtual Topology System (VTS) sits in the management and orchestration plane, programmed through a REST API and a GUI by VM managers (vCenter, OpenStack, 3rd-party) and by Cisco NSO via RESTCONF/YANG. Its control plane uses MP-BGP EVPN with route reflectors towards the ToR switches and the virtual forwarders. In the data plane, VM and VNF workloads run as bare metal, on OVS, on dVS, with SR-IOV, or on the feature-rich, high-performance Cisco VTF; DCI routers connect the fabric to the IP/MPLS WAN, the Internet and 3rd-party clouds.]

The Virtual Topology Forwarder (VTF)

• Lightweight, multi-tenanted software forwarder
• Industry's first "user space" forwarder (delivered as a VM)
• Full fault isolation
• No need for kernel certification / re-certification
• No kernel pollution, hence better stability
• Industry-leading performance: 10G per core
• Multi-threaded; performance can be scaled up by adding more cores
• Multi-hypervisor capable: a highly portable VM on top of different types of hypervisors
• Programmed by the VTC using YANG over RESTCONF
• Forwarding controlled centrally: L3 / L2 entries, N-tuple match

[Figure: inside the VTF VM, a user-space data plane ("patch panel") built on DPDK drivers connects tenant VMs (per-tenant contexts with VM/IP/MAC entries) to the physical NIC, with ARP proxy and DHCP relay helpers and a VTF control agent; overlay encapsulations include MPLS-over-GRE, VXLAN, MPLS-over-UDP and L2TPv3.]

Data Model of VTF has been published at IETF - http://tools.ietf.org/html/draft-rfernando-ipse-01

CloudVPN with ISR CPE Use Case

[Figure: the tenant portal and the SP's OSS/BSS drive the Network Services Orchestrator (NSO) over REST APIs; NSO works with the Elastic Services Controller (ESC), a PnP server and OpenStack on x86 servers.]

Workflow:
1. The ISR CPE is shipped to the customer site, connected and powered on.
2. The customer orders a VPN service through the tenant portal.
3. ESC spins up and provisions a CSR 1000v on OpenStack (x86 server) at the DCI/PE.
4. The PnP server performs zero-touch provisioning of the ISR CPE and provides the Day 1 configuration.
5. The VPN is established (IPSec, an IP overlay such as VXLAN, GRE or LISP, or L2) and CloudVPN connectivity is up.

Adding VNFs in the Cloud

[Figure: the same CloudVPN setup (tenant portal, SP OSS/BSS, NSO, ESC, PnP server, OpenStack on x86 servers). If more VNFs are needed for a service chain, additional VNFs such as an ASAv, a vESA or an Internet gateway are spun up next to the CSR 1000v behind the DCI/PE; more scalable and flexible service chaining is enabled with the VTC and the high-performance VTF (OVS/VTF).]

Conforming to the ETSI NFV Stack

[Figure: the ETSI NFV reference architecture. OSS/BSS and EMS 1-3 sit above VNF 1-3; the NFVI provides virtual computing, storage and network through a virtualisation layer over the computing, storage and network hardware. NFV Management and Orchestration (MANO) comprises the orchestrator, the VNF manager(s) and the virtualised infrastructure manager(s), together with the service, VNF and infrastructure description, connected through the reference points Os-Ma, Se-Ma, Ve-Vnfm, Or-Vnfm, Or-Vi, Vi-Vnfm, Nf-Vi, Vn-Nf and Vl-Ha.]

[Figure: Cisco mapping onto the ETSI stack. A service broker (Prime Fulfillment, Prime Service Catalog, custom portals) and a cross-domain orchestrator (NSO enabled by Tail-f) sit above domain orchestration; the VNFM is ESC and the VIM layer covers OpenStack, VMware, hypervisors and containers, managing VNFs on the physical and virtual infrastructure (compute, storage, network).]

Virtual Network Functions Manager (VNFM): responsible for
• The lifecycle management of VNFs
• Monitoring
• Elasticity
• VNF configuration and healing of VNFs during fault conditions

Virtualizing BRAS, LAC, LNS or Route Reflectors

Differences between Cloud and Branch NfV Use Cases

Cloud / data center:
• Focus on cloud orchestration and virtualization features
• A mix of applications and VNFs may be hosted in the cloud
• Horizontal scaling, hence smaller VM footprints
• Dynamic capacity and usage- / term-based billing

Branch:
• Focus on replacing hardware-based appliances
• Typically smaller x86 processing capacity in the branch
• NfV applications (firewall, NAT, WAAS, …) may consume a large proportion of the available hardware resources, hence larger VM footprints
• Cloud orchestration and automation has to be distributed over all branches
• Integration with the existing OSS is desirable for migration

[Figure: UCS servers in the DC and in branches hosting mixes of application VMs (VDI, DB, ERP, Windows) and VNFs (BRAS, IPS) across the WAN.]

• Industry’s first full featured virtual BNG (PTA/LNS) solution with scale and performance

• CSR 1000v leverages IOS XE code-base from ASR 1000

• PTA / LNS features are part of the code base

• Targets smaller scale deployments less than 8K sessions per virtual instance

• Targeted for selective PTA (PPPoE) and LNS deployment profiles

CSR 1000v as vPTA / vLNS

VMs

SP Aggregation

Customer

Premise

SP Core

Data CentervPTA

vLNS

VMs
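For orientation, a minimal, hedged IOS XE sketch of the PPPoE termination (vPTA) role described above; the pool, VLAN and authentication details are placeholders, and per-subscriber policy would normally come from RADIUS:

bba-group pppoe BBA-GROUP1
 virtual-template 1
!
interface Virtual-Template1
 ip unnumbered Loopback0
 peer default ip address pool SUBSCRIBER-POOL
 ppp authentication chap
!
ip local pool SUBSCRIBER-POOL 100.64.10.1 100.64.10.254
!
! Subscriber-facing subinterface where PPPoE sessions arrive
interface GigabitEthernet2.100
 encapsulation dot1Q 100
 pppoe enable group BBA-GROUP1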

CSR 1000v as vISG

• A virtual Intelligent Services Gateway (vISG) that can be deployed as an access gateway for hospitality environments, providing the same subscriber management functionality (IPoE) currently offered by the ASR 1000
• Targets smaller-scale deployments with fewer than 8K sessions per virtual instance
• vISG session-creation FSOL: unclassified MAC

[Figure: Wi-Fi access (indoor hotspot, residential / community Wi-Fi, metro Wi-Fi) terminating on a vISG in the data center.]

CSR 1000v vBNG Supported Profiles

Profile | vPTA | vLNS | vISG
Session type | PPPoEoVLAN | PPPoVLANoL2TP | IPoEoVLAN
Features* | Input/output ACL, ingress QoS (policing) / egress QoS (shaping), VRF awareness, IPv4/IPv6 dual stack, AAA, ANCP | IPv4/IPv6, HQoS, input/output ACL, dual-stack service and TC accounting, CoA service push | DHCP, unclassified MAC, HQoS, input/output ACL, ISG TC, L4R, PBHK, unauthenticated timeout
vCPU | 2 vCPU | 2 vCPU | 2 vCPU
Memory | 8 GB | 8 GB | 8 GB
Sessions | 8k | 8k | 8k

CSR 1000v for Route Reflector

• The CSR 1000v leverages the IOS XE code base from the ASR 1000; Route Reflector features are thus part of the code base

Scale | ASR1001 & ASR1002-X (8GB) | ASR1001 & ASR1002-X (16GB) | CSR1000v (8GB) | CSR1000v (16GB) | RP2 (8GB) | RP2 (16GB)
IPv4 routes | 7M | 13M | 8.5M | 24.8M | 8M | 24M
VPNv4 routes | 6M | 12M | 8.1M | 23.9M | 7M | 18M
IPv6 routes | 6M | 11M | 7.4M | 21.9M | 6M | 17M
VPNv6 routes | 6M | 11M | 7.3M | 21.3M | 6M | 15M
BGP sessions | 4000 | 4000 | 4000 | 4000 | 8000 | 8000

[Figure: a vRR VM hosted in the data center peering across the SP aggregation and core; available from IOS XE 3.13 (July 2013).]
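A hedged sketch of the vRR role on the CSR 1000v for IPv4 and VPNv4 (AS number, addresses and the peer-group name are placeholders):

router bgp 65000
 bgp log-neighbor-changes
 neighbor RR-CLIENTS peer-group
 neighbor RR-CLIENTS remote-as 65000
 neighbor RR-CLIENTS update-source Loopback0
 neighbor 10.0.0.11 peer-group RR-CLIENTS
 neighbor 10.0.0.12 peer-group RR-CLIENTS
 !
 address-family ipv4
  neighbor RR-CLIENTS route-reflector-client
  neighbor 10.0.0.11 activate
  neighbor 10.0.0.12 activate
 exit-address-family
 !
 address-family vpnv4
  neighbor RR-CLIENTS send-community extended
  neighbor RR-CLIENTS route-reflector-client
  neighbor 10.0.0.11 activate
  neighbor 10.0.0.12 activate
 exit-address-family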

Conclusion

Summary – what we talked about Today

This session reviewed the

• CSR 1000v System Architecture

• vCPE Network Architectures and the vMS Solution

• Virtualizing BRAS, LAC, LNS or Route Reflectors

Key Conclusions

• Virtualization is maturing fast and enabling new architectural variations

• CSR 1000v is able to meet SP requirements for virtualization from a feature-richness and performance perspective

• vCPE architectures are enabled by Cisco using the vMS solution, where the CSR 1000v offers virtualized CPE functionality in the cloud combined with orchestration

• The virtualized IOS XE of the CSR 1000v enables other NfV use-cases like vBRAS, vLNS and thus enables different architectures

• Virtualization is about changing the architecture, not simply replacing a hardware system with a software system

• Increased focus on automation and orchestration

Participate in the “My Favorite Speaker” Contest

• Promote your favorite speaker through Twitter and you could win $200 of Cisco Press products (@CiscoPress)

• Send a tweet and include

• Your favorite speaker’s Twitter handle <Speaker—enter your Twitter handle here>

• Two hashtags: #CLUS #MyFavoriteSpeaker

• You can submit an entry for more than one of your “favorite” speakers

• Don’t forget to follow @CiscoLive and @CiscoPress

• View the official rules at http://bit.ly/CLUSwin

Promote Your Favorite Speaker and You Could Be a Winner

Complete Your Online Session Evaluation

Don’t forget: Cisco Live sessions will be available for viewing on-demand after the event at CiscoLive.com/Online

• Give us your feedback to be entered into a Daily Survey Drawing. A daily winner will receive a $750 Amazon gift card.

• Complete your session surveys through the Cisco Live mobile app or your computer on Cisco Live Connect.

Continue Your Education

• Demos in the Cisco campus

• Walk-in Self-Paced Labs

• Table Topics

• Meet the Engineer 1:1 meetings

• Related sessions

Visit us at the Customer Confessional

How would you fill in the blanks?

• I wish Cisco would ______.

• Cisco has exceeded my expectations by doing ______.

• If Cisco’s products could _____ it would be ______.

We want to hear from you! Find us in room 71 of the MTE!

Thank you