Network Function Virtualisation for Enterprise Networks (BRKCRS-3447)


  • Network Function Virtualisation for Enterprise Networks

    James Sandgathe - Engineer, Technical Marketing

    Enterprise Infrastructure and Solutions Group

  • Abstract

    Network Function Virtualisation (NFV) is gaining increasing traction in the industry based on the promise of reducing both CAPEX and OPEX using COTS hardware. This session introduces the use-cases for virtualising Enterprise network architectures, such as virtualising branch routers, LISP nodes, IWAN deployments, or enabling enterprise hybrid cloud deployments. The session also discusses the technology of virtualisation from both a system architecture and a network architecture perspective. Particular focus is given to understanding the impact of running routing functions on top of hypervisors, as well as the placement and chaining of network functions. Performance of virtualised functions is also discussed.

  • Introduction & Motivation

    Deployment Models and Characteristics

    The Building Blocks of Virtualisation

    Introducing Enterprise NFV

    Demonstration - NFVIS Orchestration

    Demonstration ESA Orchestration

    Conclusion

    Agenda BRKCRS-3447

  • Some additional points

    Cisco launches Enterprise NFV

    http://www.cisco.com/go/enfv

    Enterprise NFV Technical Whitepaper

    http://www.cisco.com/c/en/us/solutions/collateral/enterprise-networks/enterprise-network-functions-virtualization-nfv/white-paper-c11-736783.html?cachemode=refresh


  • Some additional points

    Two new sessions have been added at Cisco Live Las Vegas 2016

    BRKCRS-2006 2 Hour Breakout

    TECCRS-3006 8 Hour Deep Dive Techtorial and Hands-On Lab

    These new sessions will focus on the Enterprise NFV solution

  • Introduction and Motivation

  • Network Functions Virtualisation (NFV)


    Announced at SDN World Congress, Oct 2012

    AT&T

    BT

    CenturyLink

    China Mobile

    Colt

    Deutsche Telekom

    KDDI

    NTT

    Orange

    Telecom Italia

    Telstra

    Verizon

    Others TBA

  • What is NFV? A Definition

    NFV decouples network functions such as NAT, Firewall, DPI, IPS/IDS, WAAS, SBC, RR etc. from proprietary hardware appliances, so they can run in software. It utilises standard IT virtualisation technologies that run on high-volume server, switch and storage hardware to virtualise network functions. It involves the implementation of network functions in software that can run on a range of industry-standard server hardware, and that can be moved to, or instantiated in, various locations in the network as required, without the need for installation of new equipment.

    Sources:

    https://www.sdncentral.com/which-is-better-sdn-or-nfv/

    http://portal.etsi.org/nfv/nfv_white_paper.pdf

    [Diagram: relationship between SDN and NFV - service orchestration layered over NFV and SDN running on x86 compute]

  • Motivation for Virtualising Network Functions

    CAPEX

    Deploy on standard x86 servers

    Economies of scale

    Service Elasticity

    Simpler architectural paradigm

    Changes in management access?

    Changes in HA?

    Best-of-breed

  • Motivation for Virtualising Network Functions

    OPEX

    Reduction of number of network elements

    Reduction of on-site visits

    Leveraging Virtualisation benefits

    Hardware oversubscription, vMotion, ..

    Increased potential for automated network operations

    Re-alignment of organisational boundaries

  • Deployment Models and Characteristics

  • Virtualisation Architecture Taxonomy: Classifying the Architecture

    What type of function in the network is being virtualised?

    Where does the function reside?

    How is the function hosted?

  • Virtualisation Architecture Taxonomy: Type of Functions

    Control plane

    Network policy

    Orchestration and Management

    Data/Forwarding plane

    Routing

    Packet diversion/service chaining

    L3-L7 Services

    DPI, NAT, Compression

    [Diagram: Enterprise Network Virtualisation spans network control, transport, and network functions/services]

  • Virtualisation Architecture Taxonomy: Placement and Location

    Data Centre / Campus: mid to large number of instances, high traffic volumes, two to fifty or more per location

    Branch: potentially large number of locations (tens to tens of thousands), low to mid traffic volume

    Cloud: any number of virtual instances, many traffic volumes, location agnostic

  • Virtualisation Architecture Taxonomy: Hosting of Functions

    Data Centre / Campus: virtual machines and containers; blade server clusters and high-density chassis servers; on-premise private cloud

    Branch: virtual machines; network element hosting / appliances; general purpose servers

    Cloud: virtual machines; application / Linux containers; the end user does not see the hardware

  • Virtualisation Architecture: Differences in Data Centre and Branch

    Data Centre:

    Runs 10s of thousands of VMs

    Server hardware is high-end compute; fast storage using SAN and SAN switching

    VM = 4C / 12GB / 100GB HDD

    5% headroom = 2,000 VMs

    Branch Site:

    Runs 2 to 6 VMs

    Server hardware is a lower-end chassis

    Internal storage, possibly RAID

    VM = 1C / 8GB / 250GB HDD

    5% headroom = 1-2 VMs

  • Virtualisation Architecture: Cost Impact of Scaling Compute

    DRAM cost per GB by DIMM size:

    DIMM    Cost per GB
    8GB     $11
    16GB    $13
    32GB    $16
    64GB    $26

  • Virtualisation Architecture: Cost Impact of Scaling Compute

    CPU cost per core by cores per socket:

    Cores per socket    Cost per core
    6C                  $72
    8C                  $79
    12C                 $130
    16C                 $199
    18C                 $229

  • Virtualisation Architecture: Cost Impact of Scaling Compute

    HDD cost per TB by drive size:

    HDD Size    Cost per TB
    1TB         $219
    2TB         $180
    3TB         $160
    4TB         $113
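    As a rough illustration of how these per-unit prices roll up into a per-VM hardware figure, the sketch below (not from the deck; it simply reuses the cheapest per-core, per-GB and per-TB points from the three tables above) prices the branch-size and DC-size VM footprints from the earlier comparison.

```python
# Illustrative sketch only: combine the per-unit cost figures above into a rough
# per-VM compute/memory/storage cost. VM sizes come from the DC vs. branch slide;
# prices are the cheapest per-unit points in the three tables.
def vm_component_cost(cores, ram_gb, disk_tb,
                      cost_per_core=72.0,     # 6-core socket point
                      cost_per_gb_ram=11.0,   # 8GB DIMM point
                      cost_per_tb_hdd=113.0): # 4TB drive point
    """Rough per-VM hardware cost, ignoring chassis, NICs, licences and headroom."""
    return cores * cost_per_core + ram_gb * cost_per_gb_ram + disk_tb * cost_per_tb_hdd

# Branch-style VM (1C / 8GB / 250GB) vs. DC-style VM (4C / 12GB / 100GB)
print(f"Branch VM ~ ${vm_component_cost(1, 8, 0.25):.0f}")
print(f"DC VM     ~ ${vm_component_cost(4, 12, 0.1):.0f}")
```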

  • Virtualisation of Control Plane Functions

  • Enterprise Virtualisation Models: Network Control Plane Functions

    Shared Services

    Virtualisation of Control plane functions

    Route Reflectors

    PfR MC

    LISP MS/MR

    WLC

    Can be on-premise or in larger Enterprise WAN PoPs or in the cloud

    Assuming VNFs are reachable by IP

    CSR 1000v offers functional and operational consistency

    Virtualised IOS XE

    [Diagram: vWLC, vRR, vMS/MR and vMC hosted as shared services, reachable across the WAN and Campus]

  • Example: vRR with CSR 1000v

    CSR 1000v offers full IOS XE route-reflector functionality

    Route and session scale by platform and memory:

                   ASR1001 &        ASR1001 &         CSR1000v   CSR1000v   RP2     RP2
                   ASR1002-X (8GB)  ASR1002-X (16GB)  (8GB)      (16GB)     (8GB)   (16GB)
    ipv4 routes    7M               13M               8.5M       24.8M      8M      24M
    vpnv4 routes   6M               12M               8.1M       23.9M      7M      18M
    ipv6 routes    6M               11M               7.4M       21.9M      6M      17M
    vpnv6 routes   6M               11M               7.3M       21.3M      6M      15M
    BGP sessions   4000             4000              4000       4000       8000    8000

    [Diagram: vRR VMs deployed in the data center, peering with SP aggregation, SP core and customer premise routers]

  • Cloud Virtualisation

  • Application Visibility in the Public Cloud

    Cloud network enhanced by sophisticated routing functionality

    Secure connectivity to cloud (encryption)

    VPC to VPC connectivity

    Application Visibility

    WAAS

    VPCs are part of the enterprise network

    End-to-end Cisco network (including AWS Cloud)

    [Diagram: remote sites & employees, enterprise data center, and AWS VPCs (VPC1, VPC2) connected over the public Internet]

  • Branch Virtualisation: Cloud Options

    Option 4 - L2 Private-cloud Branch 1:1

    Small branches with low throughput and no WAAS, encryption, or HA requirements

    Switch in the branch: transport, storm control, L2 CoS

    Routing & services (routing, QoS, FW, NAT..): done in the PoP or in the SP DC, running on UCS

    Single tenant, but optionally single- or multi-site

    Option 5 - L3 Private-cloud Branch 1:1

    L3 router remains in the branch but performs minimal functions

    L4-7 services (FW, NAT..) virtualised in the private cloud

    Branch router tightly coupled with the virtual router in the private cloud for services

    Suitability for applications with stringent bandwidth / delay / jitter requirements?

    [Diagram: branches connected over fibre/DSL/cable to the WAN, with routing, QoS, FW and NAT functions hosted in the private cloud alongside the DC and campus]

  • Virtualising Branch Functions

  • Virtualisation of Branch Functions

    Current branch infrastructure often contains physical appliances that complicate the architecture

    Typical Appliances vary by branch size

    Remote office (1-5 users): firewall

    Small (5-50 users): switched infrastructure, small call control, firewall, IPS/IDS

    Medium (50-100 users): redundancy, local campus, call control, firewall, IPS, IDS, WAAS

    Large (100+ users): redundancy, local campus, call control, firewall, IPS, IDS, WAAS

    In addition to end-points (phones, printers, local storage)

    Branch appliances:

    Router: routing, ACL, NAT, SNMP..

    Switch: port aggregation

    Services realised with appliances

    Full redundancy

    Could be multi-vendor (best of breed)

    [Diagram: branch with router, switch, CUBE and service appliances, dual fibre/DSL/cable uplinks to the WAN towards the campus/DC]

  • Branch Virtualisation: On-premise Options

    Option 1 - Branch router + integrated L4-7 services (e.g. ISR + UCS-E)

    Router performs transport functions

    Services (firewall, WAAS..) virtualised on UCS-E

    Option 2 - Branch router + virtualised L4-7 services

    Router performs transport functions (routing, ACL, NAT, SNMP..)

    Services virtualised on an external server

    VNFs could be multi-vendor (best of breed)

    Option 3 - Fully virtualised branch

    Physical router replaced by x86 compute

    Both transport and network services virtualised

    VNFs could be multi-vendor (best of breed)

  • The Building Blocks of Virtualisation (Today)

  • ETSI NFV Reference Architecture

    NFVI (NFV Infrastructure): compute, storage and network hardware; a virtualisation layer; and virtual compute, storage and network resources

    VNFs (VNF 1..3), each with an associated Element Management System (EMS)

    NFV Management and Orchestration (MANO): Orchestrator, VNF Manager(s), Virtualised Infrastructure Manager(s)

    OSS/BSS and the service, VNF and infrastructure description feed the MANO stack

    Reference points include Os-Ma, Se-Ma, Ve-Vnfm, Or-Vnfm, Or-Vi, Vi-Vnfm, Nf-Vi, Vn-Nf and Vl-Ha

    [Diagram: ETSI NFV reference architecture, mapping VNFs over a hypervisor and compute hardware, with management and orchestration alongside]

  • Architecture Building Blocks: Enterprise Virtualisation

    Orchestration and Management

    Virtual Network Functions

    Virtual Routers, Firewalls, NATs

    Hypervisors / Containers

    A transport network

    Physical Hardware

    X86 servers

    Virtualisation-capable routers

    Service Chaining (Optional)

    [Diagram: branches, WAN and DC with orchestration & management (policy, PnP, lifecycle management) over x86 hosts running a hypervisor, vSwitch and VMs]

  • Virtual Network Functions

  • Available VNFs from Cisco for Enterprise (Sample)

    Security: Deep Packet Inspection (CSR1Kv), Web Security (vWSA), E-Mail Security (vESA), Identity Services Engine (vISE), DMVPN (CSR1Kv), SSL VPN (CSR1Kv), Virtual ASA Firewall (ASAv), NAT (CSR1Kv), Virtual Zone Based Firewall (CSR1Kv), IPSec and SSL VPN (ASAv), vNGIPS (SourceFire), IPSec VPNs - Flex, Easy, GET (CSR1Kv)

    Network Infrastructure: Virtual Router CE/CPE (CSR1Kv), Nexus 1000V, Virtual Route Reflector (CSR1Kv, XRv), CML/VIRL, Wireless LAN Controller (WLC/MSE), Network Analysis Module (NAM), Wide Area Application Services (WAAS), AppNav and AVC (CSR1Kv), DHCP (CSR1Kv), IP SLA (CSR1Kv), VXLAN (L2, L3), OTV, VPLS, LISP (CSR1Kv), Virtual PE / IP Router (CSR1Kv), Cisco VDS-IS

    Voice & Video: Cisco Unified Communications Manager, Presence, Unity; Unified Contact Center and CC Express; CUBE (CSR1Kv) - roadmap; Video Conferencing (MSE8K)

    Management & Orchestration: Enterprise Network Controller (APIC-EM), Prime Performance Manager, Prime Analytics, Prime Network Registrar, IP Express, Prime Access Registrar, Prime Fulfillment, Order Fulfillment, Prime Home, Cisco Prime Infrastructure, Provisioning, Prime Collaboration, Prime Network Services Controller, UCS Director, Prime Service Catalog, Intelligent Automation for Cloud (IAC)

  • Cisco Virtual Network Functions

    Adaptations from physical systems / solutions

    Feature and operational consistency between physical and virtual systems

    E.g. CSR 1000v and ASR 1000 / ISR 44xx are all based on the SAME IOS XE

    Exposure of APIs (REST)

    Flexible Licensing models (perpetual, Smart Licensing, Cisco ONE)

    Flexible Performance

    ASAv: {100Mbps, 1Gbps, 2Gbps}

    CSR 1000v: {10Mbps, 50Mbps, 100Mbps, 250 Mbps, 500Mbps, 1Gbps, 5 Gbps, 10Gbps}

    WAAS: {200, 750, 1300, 2500, 6000, 12000, 50000}
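    To make the throughput elasticity concrete, the small helper below (illustrative only; the tier values are the CSR 1000v throughput levels listed above) picks the smallest licensed tier that covers a branch's required WAN bandwidth.

```python
# Illustrative helper: choose the smallest CSR 1000v throughput licence that covers
# a required WAN bandwidth. Tier values (Mbps) are the ones listed above.
CSR_TIERS_MBPS = [10, 50, 100, 250, 500, 1000, 5000, 10000]

def pick_csr_tier(required_mbps: float) -> int:
    for tier in CSR_TIERS_MBPS:
        if tier >= required_mbps:
            return tier
    raise ValueError("requirement exceeds the largest CSR 1000v tier")

print(pick_csr_tier(180))   # -> 250
```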

  • Cisco CSR 1000V Virtual IOS XE Networking: Cisco IOS Software in a Virtual Form-Factor

    IOS XE Cloud Edition

    Selected features of IOS XE based on targeted use cases

    Infrastructure Agnostic

    Not tied to any server or vSwitch, supports ESXi, KVM, Xen, AMI

    Throughput Elasticity

    Delivers 10Mbps to 20 Gbps throughput, consumes 1 to 8 vCPU

    Multiple Licensing Models

    Term, Perpetual

    Programmability

    RESTful APIs for automated management

    Virtualised Networking with Rapid Deployment and Flexibility

    [Diagram: CSR 1000V running as a VM alongside application VMs on a virtual switch, hypervisor and server, forming a VPC / vDC]

  • Introducing vCUBE (CUBE on CSR 1000v): Architecture

    The CSR (Cloud Services Router) 1000v runs on a hypervisor: IOS XE without the router hardware

    CSR 1000v (virtual IOS XE) architecture: an RP (control plane) and ESP (data plane), each with Chassis Manager and Forwarding Manager, IOS-XE, FFP/QFP code, and a kernel with utilities, all inside a virtual container with virtual CPUs, memory, flash/disk, console, management Ethernet and Ethernet NICs

    vCUBE adds CUBE signalling and CUBE media processing on top of the CSR 1000v

    [Diagram: the virtual container sits on the hypervisor, vSwitch, NICs and x86 multi-core hardware]

  • Hypervisors

  • CSR 1000v and Hypervisor Processing Relationships

    Example: 3 CSR VMs scheduled on a 2-socket 8-core x86

    Different CSR footprints shown

    Type 1 Hypervisor

    No additional Host OS represented

    The hypervisor scheduler algorithm governs how vCPU / IRQ / vNIC / VMkernel processes are allocated to pCPUs

    Note the various schedulers

    Running ships-in-the-night

    [Diagram: three CSR 1000v VMs (4 vCPU, 1 vCPU and 2 vCPU footprints) on a two-socket, 8-core-per-socket x86 server; each guest runs its own IOS, Fman/CMan, PPE, HQF and Rx processes over a guest OS scheduler, while the hypervisor scheduler maps vCPUs, IRQs, vNICs, the vSwitch and VM kernel threads onto physical CPUs]

  • Virtual Switches / Bridges

    Virtual switches ensure connectivity between physical interfaces and Virtual Machines

    Can have multiple vSwitches per host

    May have L2 restrictions (some vSwitches are switches in name only)

    May impact performance

  • I/O Architecture

  • Hypervisor virtualises the NIC hardware to the multiple VMs

    Hypervisor scheduler responsible for ensuring that I/O processes are served.

    There is a single instance of physical NIC hardware, including queues, etc.

    Many-to-one relationship between the VMs' vNICs and the single physical NIC

    One vHost/VirtIO thread used per configured interface (vNIC)

    May become a bottleneck at high data rates

    Virtualising I/O: KVM Architecture Example

    [Diagram: packets traverse the physical NIC and its driver in the host kernel, a virtual switch / Linux bridge, tap devices, and vHost/VirtIO vNICs in QEMU user space into the guest VMs; each hop involves a packet copy]

  • I/O Optimisations: Direct-map PCI (PCI pass-through)

    Physical NICs are directly mapped to a VM

    Bypasses the Hypervisor scheduler layer

    PCI device (i.e. NIC) no longer shared among VMs

    Typically, all ports on the NIC are associated with the VM

    Unless the NIC supports virtualisation

    Caveats:

    Limits the scale of the number of VMs per blade to number of physical NICs per system

    Breaks live migration of VMs

  • I/O Optimisations: Single Root IO Virtualisation - SR-IOV with PCIe pass-through

    Allows a single PCIe device to appear as multiple separate PCIe devices; requires a NIC that supports virtualisation

    Enables network traffic to bypass software switch layers

    Creates physical and virtual functions (PF/VF): the PF is a full-featured PCIe function, the VF is a PCIe function without configuration resources

    Each PF/VF gets a PCIe requestor ID so that I/O memory management can be separated between different VFs

    Number of VFs dependent on NIC (O(10))

    Ports with the same (e.g. VLAN) encap share the same L2 broadcast domain

    Requires support in BIOS/Hypervisor
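    As a concrete illustration of the PF/VF split, the sketch below (an assumption-laden example, not from the deck) uses the standard Linux sysfs interface to create virtual functions on an SR-IOV capable NIC; the interface name is a placeholder.

```python
# Illustrative sketch (assumptions: Linux host, an SR-IOV capable NIC named
# "enp6s0f0", root privileges, and BIOS/hypervisor support already enabled).
# It provisions virtual functions via the standard sysfs interface and lists
# the resulting VF PCI devices.
from pathlib import Path

def enable_vfs(nic: str, num_vfs: int) -> list[str]:
    dev = Path(f"/sys/class/net/{nic}/device")
    total = int((dev / "sriov_totalvfs").read_text())   # NIC-dependent, typically O(10)
    if num_vfs > total:
        raise ValueError(f"{nic} supports at most {total} VFs")
    (dev / "sriov_numvfs").write_text("0")               # reset before re-provisioning
    (dev / "sriov_numvfs").write_text(str(num_vfs))
    # Each VF appears as a virtfnN symlink pointing at its own PCI address
    return sorted(p.resolve().name for p in dev.glob("virtfn*"))

if __name__ == "__main__":
    print(enable_vfs("enp6s0f0", 4))
```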

  • Enterprise NFV

  • DEMONSTRATION: ESA


  • The Current Enterprise Branch Landscape

    Multiple devices: routers, appliances, servers

    Costly to operate: upgrades, refresh cycles, site visits

    Difficult to manage: device integration and operation

    The horsemen of the branch apocalypse

  • What can the system do for me?

  • Orchestration & Automation

    What if a new attacker threatened the business?

    A new defense network can be up in minutes, everywhere at once

    [Diagram: offices with routing, WLC and FW/IPS functions instantiated as virtual networks (vnets) across all sites]

  • How does it make my life simpler?

  • It's simple, really

    Here is your branch on hardware: separate router, firewall, wireless, WAN optimisation and proxy/cache devices

    This is your branch with Cisco Enterprise NFV: an x86 host (NIC, NIM, BMC, switch) running an operating system, the KVM virtualisation layer, life-cycle management, automation and policy enforcement, hosting vAPPs for WLC, route/path selection, FW/IDS and WAN optimisation

  • So why not just put a server at the branch and be done with it?

  • 1. VMWare vCenter sends packet from central location (East coast)

    2. Packet carried over MPLS (VZ) to store (Sunnyvale Lab)

    3. Physical Ethernet connected to switch and frame forwarded to

    VMWare Distributed vSwitch (DvSW)

    4. DvSW forwards frame to CSR

    5. CSR removes MPLS label and forwards to DvSW

    6. Forwarded from DvSW to Juniper SRX FW

    7. FW forwards to DvSW for VMKernel going out to EX and back

    8. Packet arrives at VMKernel

    Managing the Hypervisor

    [Diagram: vCenter manages a remote-site ESXi host (UCS 240) over the carrier MPLS (VZ) WAN; the CSR and firewall VMs sit in the management path via the distributed vSwitch (DvSW-1) and VMkernel port]

  • Managing the Hypervisor

    While changes were made to the FW, VLAN assignments, CSR, or FW connectivity, management traffic to/from vCenter gets lost and begins to flap

    vCenter sometimes misses confirmation of changes made

    This is an issue since management of the hypervisor becomes dependent on the stability of the VMs running in it

    One /30 from each carrier for the WAN circuit

    [Diagram: the same remote-site topology, highlighting changes in the port channel or VLAN, in the CSR, and in the FW]

  • Managing the Hypervisor

    Virtualisation evolved as a DC technology where high speed, near-zero latency, and straight IP access existed between the management console and the hypervisor instance

    Applying this to the WAN causes it to break, since managing the hypervisor is dependent on a VM and its stability

    This is a fundamental flaw in the architecture of virtualisation

    [Diagram: the same remote-site topology with dual carriers (VZ and ATT), again highlighting changes in the port channel or VLAN, the CSR, and the FW]

  • Enterprise NFV Solution Architecture: Phase 1

    Various host options for different branch sizes: ISR-4K + x86 on UCS-E, or a UCS x86 server

    Software host (NFVIS) managing virtualisation and hardware: hypervisor, virtual switching, platform management and an API interface

    VNF and application hosting with 3rd-party support: ISRv, ASAv, WAAS, vWLC, vNAM, 3rd-party VNFs and applications

    Common orchestration and management across the virtual and physical network: ESA + APIC-EM + Prime Infrastructure

    NFVIS = Network Function Virtualisation Infrastructure Software

    (Legend: Cisco supplied software vs. 3rd party supplied software)

  • Branch Profile Design: Enterprise Service Automation

    Upload devices to be shipped

    Upload the branch locations

    Custom design a profile

    Map to branch(es)

    Associate the templates & attributes

    Pick validated topologies

    Select functions

  • Orchestration & Management: Day 0/1

    Enterprise Services Automation (ESA) works with APIC-EM (PnP server), Prime Infrastructure, a Day 0/1 config repository and ESC-Lite

    The NFVIS host reports its serial number and IP over PnP; ESA maps the branch profile to that serial number and drives provisioning over REST

    [Diagram: a branch office host running NFVIS with vSwitch, IPS and WAAS VNFs is provisioned across the WAN by ESA, APIC-EM, Prime Infrastructure and the config repository]

  • Orchestration & Management: Day 2

    Day 2 element management (config changes, fault monitoring etc.) is done by Prime Infrastructure, APIC-EM, and VNF-specific element managers (for 3rd-party VNFs or if the VNF is not supported by PI)

    ESA plays no role in day 2 operations

    [Diagram: element management (e.g. WCM, CSM) reaching the NFVIS host and its VNFs across the WAN]

  • Best-of-breed Trusted Services from Cisco: ISRv, ASAv/FTD*, vWAAS, vWLC

    Consistent software across physical and virtual

    High Performance

    Rich Features

    End-to-end Support

    Proven Software

    Application Optimisation

    Superior Caching with

    Akamai Connect

    Survivability & Scale

    Consistency across the

    Data Center and Switches

    Built for small and medium

    branches

    Comprehensive Protection

    Full DC-class Featured

    Functionality

    Designed for NFV

    Cost-effective with NFV

    * FirePOWER Threat Defense for ENFV June/July 2016


    Windows 2012 and Linux Server also supported


  • Enterprise NFV Infrastructure Software (NFVIS): Optimised for Network Services

    Network Hypervisor

    Enables segmentation of virtual networks

    Abstract CPU, memory,

    storage resources

    Zero Touch Deployment

    Automatic connection to PnP server

    Secure connection to the orchestration system

    Easy day 0 provisioning

    Life Cycle Management

    Provisioning and launch of VNFs

    Failure and recovery monitoring

    Stop and restart services

    Dynamically add and remove services

    Service Chaining

    Elastic service insertion

    Multiple independent service paths based on applications or

    user profiles

    Open API

    Programmable API for service orchestration

    REST and NETCONF API


  • NFVIS, the POWER under the hood: Virtualisation

    Network Function Virtualisation Infrastructure Software

    [Diagram: NFVIS internals: Linux with KVM, virtualized services, vSwitch bridges (br1, br2), a PnP client, platform management and an API interface over physical interfaces Int-1..Int-3]

    Kernel Virtual Machine (KVM) to abstract service functions from hardware

    Virtual switching provides connectivity between service functions and to physical interfaces


  • NFVIS, the POWER under the hood: REST (HTTPS) and NETCONF (SSH)

    Network Function Virtualisation Infrastructure Software


    Register and deploy services

    Configure platform

    Gather monitoring statistics

    PnP client for ZTD

    Platform Management

    Controlling hardware specifics such as storage, memory, network interface connectivity

    Health monitoring

    Hardware performance features such as SR-IOV (PF = Physical Function, VF = Virtual Function)
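    As an illustration of driving NFVIS through its REST interface, the sketch below shows what a deployment call could look like; the host name, credentials and resource path are assumptions for illustration rather than the documented NFVIS URI scheme, so consult the NFVIS API reference for the exact resources.

```python
# Illustrative sketch only: a REST call against an NFVIS host to request a VNF
# deployment. The host, credentials and resource path are placeholders, not the
# verified NFVIS API; NFVIS also exposes NETCONF over SSH for the same operations.
import requests

NFVIS = "https://nfvis-branch-1.example.com"   # assumed hostname
AUTH = ("admin", "password")                   # assumed HTTP basic auth over HTTPS

def deploy_vnf(name: str, image: str, flavor: str) -> int:
    payload = {
        "deployment": {
            "name": name,
            "vm_group": {"image": image, "flavor": flavor, "bootup_time": 600},
        }
    }
    # hypothetical lifecycle resource path, shown only to sketch the call shape
    r = requests.post(f"{NFVIS}/api/config/vm_lifecycle/deployments",
                      json=payload, auth=AUTH, verify=False, timeout=30)
    r.raise_for_status()
    return r.status_code

if __name__ == "__main__":
    print(deploy_vnf("isrv-branch-1", "isrv.tar.gz", "isrv-small"))
```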


  • NFVIS, the POWER under the hood: Local Management

    NFVIS stack: Linux and a hypervisor on x86 (with hardware accelerators and WAN/LAN network interfaces), vSwitch, interface and platform hardware drivers, a hardware accelerator SDK, platform initialisation software, a PnP client, service assurance and MANO agents, server management functions, security (secure boot / code signing), licensing, a VNF lifecycle management agent with programmable APIs, and a local WebUI

    The NFV infrastructure (NFVI) compute resources host Cisco and 3rd-party VNFs

    Enterprise NFV local management capabilities

    Components:

    Local GUI, VM Life-cycle Manager

    Local PnP Agent

    Useful if WAN connectivity is unavailable

    For small deployments

    All controls are written using the public APIs!


  • Solution supports different form-factors and resources to meet varying demands

    Provide the physical resources for NFVIS, VNFs and applications.

    Enterprise NFV solution runs on x86 based host

    UCS-E

    Cisco UCS-C

    Enterprise NFV Scalable Services Compute Platforms

    UCS C-Series

    ISR-4K with

    UCS E- Series


  • Enterprise NFV: UCS C220 M4

    Designed for a wide range of workloads

    Dense 1RU modular general-purpose compute platform

    CPU: single or dual socket, 4 to 18 cores each

    Memory: up to 512GB

    Storage: 4 or 8 drives, up to 8TB (RAID 10)

    External interfaces: dual GE on-board, two PCIe slots (quad or dual GE)

    Cisco Integrated Management Controller (CIMC)


  • An NFV Platform with modular options: x86 compute and GE connectivity on either an ISR-4K + x86 on UCS-E or a UCS x86 server

  • With NFVIS, an SD-WAN solution (IWAN) is built in, along with automation and orchestration control

    [Diagram: NFVIS host running VNFs with IWAN connectivity to the WAN and Internet, driven by orchestration & automation]

  • Enterprise NFV Modular Compute Platform: Cisco ISR 4000

    Revolutionary platform architecture; reliable, best edge platform

    Support: one support cost

    Native L2-7 services: security, optimisation

    Virtualised services framework: appliance-level performance

    Life-cycle: 5-7 years

    UCS E-Series: integrated compute with OIR support, up to 8 cores


  • Enterprise NFV: UCS-E Compute Blade (for ISR models 4331, 4351, and 4451)

    UCS-E140S M2: Intel Xeon (Ivy Bridge) E3-1105C v2 (1.8 GHz), 4 cores; 8-16 GB DDR3 1333MHz; 200 GB-2 TB storage (2 HDD; SATA, SAS, SED, SSD); RAID 0 & 1; network: 2 internal GE ports, 1 external GE port

    UCS-E160D M2: Intel Xeon (Ivy Bridge) E5-2418L v2 (2 GHz), 6 cores; 8-48 GB DDR3 1333MHz; 200 GB-3 TB storage (3 HDD; SATA, SAS, SED, SSD); RAID 0, 1 & 5; network: 2 internal GE ports, 2 external GE ports, PCIe card with 4 GE or 1 10 GE FCoE

    UCS-E180D M2: Intel Xeon (Ivy Bridge) E5-2428L v2 (1.8 GHz), 8 cores; 8-96 GB DDR3 1333MHz; 200 GB-5.4 TB storage (3 HDD*; SATA, SAS, SED, SSD); RAID 0, 1 & 5*; network: 2 internal GE ports, 2 external GE ports, PCIe card with 4 GE or 1 10 GE FCoE

    A new model for late summer CY16 doubles the memory and adds 50% more CPU


  • NFVIS Service Chaining: Today

    Each VNF may connect externally and/or to other NFV services

    The service may be accessed in multiple ways:

    Directly by IP address (e.g. AP control traffic to the vWLC)

    Connected in the packet forwarding path, or "stitching" (e.g. DIA traffic)

    By utilising other services to divert packets to it (e.g. optimised traffic steered to WAAS)

    [Diagram: ISRv, ASAv, WAAS and vWLC chained over the KVM virtualisation layer]

  • Service Chaining Connectivity with NSH

    Network Service Header (NSH) will follow to address more advanced needs for service chaining

    Offers new functionality and a dedicated service plane

    Provides traffic steering capabilities AND metadata passing

    Provides path identification, loop detection, service hop awareness, and service specific OAM capabilities

    NSH availability for Phase 2

    [Diagram: a service classifier in front of ISRv, ASAv, WAAS and vWLC over the KVM virtualisation layer]

  • Service Chaining with the Network Services Header (NSH)

    Policy is sent from the orchestrator/controller to the service classifier

    Inbound packets are classified and encapsulated with an NSH header

    Packets are forwarded to the VNFs according to the policy

    NSH availability is planned for Phase 2

    [Diagram: the orchestrator/controller pushes policy to the service classifier, which prepends NSH + IP encapsulation and steers packets through ISRv, ASAv, WAAS and vWLC over the KVM virtualisation layer]
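    To make the NSH encapsulation concrete, the sketch below packs the two fixed NSH words defined in RFC 8300 (the base header plus the service path header carrying the Service Path Identifier and Service Index). It is a supplementary example rather than part of the original session, and the SPI/SI values are arbitrary.

```python
# Illustrative sketch: build the 8 fixed bytes of an NSH header per RFC 8300.
# Base header: Ver(2) O(1) U(1) TTL(6) Length(6) U(4) MD Type(4) Next Protocol(8)
# Service path header: Service Path Identifier (24 bits) + Service Index (8 bits)
import struct

def nsh_headers(spi: int, si: int, md_type: int = 0x2, next_proto: int = 0x1,
                ttl: int = 63, length_words: int = 2) -> bytes:
    base = ((ttl & 0x3F) << 22) | ((length_words & 0x3F) << 16) | \
           ((md_type & 0xF) << 8) | (next_proto & 0xFF)        # Ver/O/U bits left at 0
    sp = ((spi & 0xFFFFFF) << 8) | (si & 0xFF)
    return struct.pack("!II", base, sp)

# Classify a flow onto service path 42 with a starting service index of 255;
# each service hop would decrement the service index.
print(nsh_headers(spi=42, si=255).hex())   # -> 0fc2020100002aff
```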

  • 1. Frame arrives LAN GEx with CSR MAC Address

    2. GE bridged to NFVIS-vSwitch

    3. BR0 of vSwitch connects to ASAv (BR0)

    4. ASAv processes frame and Sends to vSwitch BR1

    5. vSwitch BR1 connects to CSR

    6. CSR sends back to BR1 with destination vWAAS

    7. vWAAS processes (compresses) Packet and sends back to CSR via BR1

    8. CSR routes the frame to WAN GE

    Example Packet Flows: LAN -> WAN

    [Diagram: NFVIS host with LAN and WAN NICs, vSwitch bridges BR0, BR1 and BR-WAN, and tap interfaces connecting the ISRv (IWAN), ASAv, WAAS, WLC and Windows/Linux VMs]

  • UCS Packet Flow: ARP by LAN Endpoint to WLC

    1. ARP request sent by the endpoint into GE1

    2. ARP passed by GE into BR0

    3. ARP flooded out all ports; it reaches all interfaces, VNFs and applications connected to BR0

    4. One of the ARPs also passes to the ASAv

    5. The ASAv, operating transparently, forwards the ARP to BR1

    6. ARP flooded in BR1 to reach the ISRv and WAAS

    [Diagram: the ARP frame replicated across BR0 and BR1 on the NFVIS vSwitch, reaching the WLC, NAM, Windows/Linux, ASAv, WAAS and ISRv VMs]

  • ISR + UCS-E Architecture for Enterprise NFV

    L3/L4 transport is always done in the ISR4K

    WAN: NIM module (4G, Ts, etc.) or on-board GE

    LAN: Model 1: UCS-E LAN; Model 2: UCS-E LAN + NIM LAN

    On-board virtualisation on the ISR adds Snort or WAAS

    [Diagram: ISR-4K (IOSd, FFP data plane, NIM/FPGA, Snort and WAAS containers over KVM) connected via the internal NIC and MGF to the UCS-E blade, which runs NFVIS/KVM with vSwitch bridges BR0/BR1 hosting WLC, Windows/Linux and ASAv VMs]

  • UCS-E Packet Flow: Go-Through, LAN (UCS-E) to WAN

    Service path example: ASAv -> WAAS -> IOS XE

    WAAS is inserted via AppNav

    LAN connected to the UCS-E

    Traffic is WAN-optimised between the WAN interface and the WAAS VNF in the service container

    [Diagram: traffic enters the UCS-E LAN NIC, passes through the ASAv on the NFVIS vSwitch, crosses the internal NIC to the ISR-4K, is diverted to the WAAS service container via AppNav, and exits the ISR-4K WAN interface]

  • UCS-E Packet Flow: Go-Through, LAN (ISR4K) to WAN

    Service chain example: ASAv -> WAAS -> IOS XE

    WAAS is inserted via AppNav

    LAN connected to the NIM

    Traffic is WAN-optimised between the WAN interface and the WAAS VNF in the service container

    [Diagram: same ISR-4K + UCS-E topology as above, with the LAN attached to the ISR-4K NIM instead of the UCS-E]

  • DEMONSTRATION: Local GUI


  • Conclusion

  • Key Conclusions

    1. Network Function Virtualisation is rapidly maturing and enabling first use-cases TODAY for enterprise network functions

    Virtualisation of control plane functions

    Cloud-based network services

    2. Virtualisation of enterprise network functions enables new architectural approaches leading to potential CAPEX and OPEX savings

    The benefit of replacing existing transport infrastructure solutions for its own sake is unclear

    Orchestration and Management put into the spotlight

    3. Architectural details both at the system and network level need to be well understood and examined

    E.g. Service Chaining

  • Call to Action

    Visit the World of Solutions for

    Cisco Campus

    Walk in Labs

    Meet the Engineer

    Cisco Live Berlin Sessions BRKSPG-2063: Cisco vBNG Solution with CSR 1000v and ESC Orchestration

    LTRVIR-2100: Deploying Cisco Cloud Services Router in Public and Private Clouds

    BRKCRS-1244: SP Virtual Managed Services (VMS) for Intelligent WAN (IWAN)

    DevNet Zone

  • Complete Your Online Session Evaluation

    Don't forget: Cisco Live sessions will be available for viewing on-demand after the event at CiscoLive.com/Online

    Complete your session surveys through the Cisco Live mobile app or your computer on Cisco Live Connect.

  • Thank you

  • Appendix A

  • Virtualisation Trade-Offs and Research Topics

  • Main Trade-off and Research Areas

    1. Cost of the virtualisation solution as a function of performance

    2. Trading off performance for virtualisation flexibility

    Tuning performance may impact virtualisation elasticity

    3. Architectural considerations

    Capacity planning for service function chains?

    Orchestration solution?

    High-availability requirements?

    [Diagram: CAPEX/OPEX, performance and architecture as interacting trade-off dimensions]

  • Cost / Performance Trade-offs

    CAPEX: viability of virtualisation may require a minimum VM-packing density on a server

    How many VMs can be deployed simultaneously to achieve a certain CAPEX goal?

    Particularly applicable for Cloud deployment architectures

    What are cost effective deployment models?

    Mixing of application VMs and VNFs on the same hardware?

    Single-tenant / Multi-tenant?

    Hypervisor type?

    Hyperthreading?

    SLA guarantees and acceptable loss rates?

    High-availability requirements and architectures?

  • Architectural Considerations

  • Differences between Cloud and Branch Virtualisation Use-Cases

    Cloud / DC:

    Focus on cloud orchestration and virtualisation features

    A mix of applications and VNFs may be hosted in the cloud

    Horizontal scaling -> smaller VM footprints

    Dynamic capacity & usage- / term-based billing

    Branch:

    Focus on replacing hardware-based appliances

    Typically smaller x86 processing capacity in the branch

    Virtualised applications (firewall, NAT, WAAS..) may consume a large proportion of the available hardware resources -> larger VM footprints

    Cloud orchestration and automation has to be distributed over all branches

    Integration with existing OSS is desirable for migration

    [Diagram: DC hosting VDI, DB, ERP and DPI VMs on UCS vs. branches hosting firewall, IPS and DPI VNFs on smaller UCS servers, connected over the WAN]

  • Single-Branch vs. Multi-Branch VM Deployments

    Deployment of multi-tenant VMs can significantly improve the business case

    Leverage multi-tenancy feature set in IOS XE on CSR 1000v

    Leverages different footprint sizes of CSR 1000v, for example

    Deploy small footprint for single-branch & large footprint for multi-branch

    BUT:

    comes with a different operational model (Need to consider multi-tenancy for on-boarding a new branch)

    Has different failure-radius implications


  • CSR 1000v as Multi-Tenant vCPE: Example

    Multi-tenant CSR 1000v deployed for vanilla branches requiring 5 Mbps each

    Single-tenant CSR 1000v deployed for high-end branches requiring 50 Mbps each

    Note that the 44-VM scenario (Profile 2) oversubscribes the server; however, the maximum bandwidth requirement per VM is only 50 Mbps

    Profile 1 (multi-tenant): 1 vCPU CSR, 400 Mbps; 200 VRFs @ 5 Mbps/VRF; QoS, DHCP server, static route, IP SLA, SNMP

    Profile 2 (single-tenant): 1 vCPU CSR, 50 Mbps; QoS, DHCP server, OSPF, IP SLA, IGMPv2, PIM SM, SNMP, ACL

                                                    Profile 1    Profile 2
    Number of VM instances per server chassis       20           44
    Number of branches per VNF instance             40           1
    Total number of branches per server blade       800          44
    Total aggregate bandwidth per server chassis    8 Gbps       2.2 Gbps
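    The arithmetic behind the table is simple enough to sanity-check; the sketch below (illustrative only) combines VM packing and per-VM throughput into branches and aggregate bandwidth per chassis.

```python
# Quick arithmetic behind the vCPE packing table above (illustrative only).
def chassis_capacity(vms_per_chassis, branches_per_vnf, vm_throughput_mbps):
    branches = vms_per_chassis * branches_per_vnf
    aggregate_gbps = vms_per_chassis * vm_throughput_mbps / 1000
    return branches, aggregate_gbps

# Profile 1: 20 multi-tenant 400 Mbps CSR VMs, each serving 40 branches
print(chassis_capacity(20, 40, 400))   # -> (800, 8.0)
# Profile 2: 44 single-tenant 50 Mbps CSR VMs, one branch each
print(chassis_capacity(44, 1, 50))     # -> (44, 2.2)
```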

  • VNF High-Availability Architecture Considerations

    Traditional Networking: make all critical network services highly-available

    Active-Standby or Active-active redundancy models

    Stateful redundancy for NAT, Firewall (i.e. stateful services)

    Adds architectural complexity

    HSRP, NSR, Stateful HA features

    Does a virtualised environment need HA?

    Depends on the PIN (place in the network)

    Branch: YES

    Cloud: MAYBE

    Can rely on reload / re-boot of VMs as this happens much faster

    Function of VM scope (cf. single-branch VNFs)

  • Performance Aspects for VNF Deployments

  • Performance Aspects for VNF Deployments

    Throughput / SLAs for VNFs are determined by a multitude of factors

    System architecture, in particular I/O

    Hypervisor type (VMWare ESXi, KVM, Microsoft HyperV, Citrix XEN..)

    Throughput can be increased significantly by hypervisor tuning and the use of direct-I/O techniques

    Need to determine

    How many VMs to run on a server blade

    Acceptable frame loss rates

  • Hypervisor Impacts on Performance

    VMware ESXi and KVM schedulers can perform in the same order of magnitude with tuning

    BUT: tuning recommendations need to be applied, especially for KVM

    Most impactful tuning: I/O optimisations (e.g. VM-FEX, SR-IOV)

    KVM currently shows bottlenecks when un-tuned

    A descriptor ring restriction in KVM limits performance improvements for larger vCPU VMs

    CSR 1000v IOS XE 3.16 single-feature throughput in Gbps (IMIX, 0.01% FLR, C240 M3), ESXi:

    vCPUs    CEF   ACL   NAT   Firewall  QoS   HQoS  IPSec Single AES  IPSec Crypto Map
    1 vCPU   2.5   2.2   1.4   1.7       2.4   1.5   0.5               0.1
    2 vCPU   2.9   2.8   2.4   2.7       3.0   1.8   0.8               0.2
    4 vCPU   2.2   2.3   2.1   2.4       2.3   1.4   1.1               0.2

    CSR 1000v IOS XE 3.16 single-feature throughput in Gbps (IMIX, 0.01% FLR, C240 M3), KVM:

    vCPUs    CEF   ACL   NAT   Firewall  QoS   HQoS  IPSec Single AES  IPSec Crypto Map
    1 vCPU   3.0   2.7   1.9   2.2       2.6   2.1   0.7               0.2
    2 vCPU   2.9   3.0   2.0   2.3       2.5   1.7   0.8               0.2
    4 vCPU   2.0   2.2   1.9   1.9       2.0   1.5   1.0               0.2

  • KVM Performance Tuning Recommendations (REFERENCE)

    Use a direct-path I/O technology (SR-IOV with PCIe pass-through) together with the CPU tuning below. Otherwise, the following configurations are recommended:

    Disable hyperthreading (CPU): can be done in the BIOS

    Find the I/O NUMA node (CPU): cat /sys/bus/pci/devices/0000:06:00.0/numa_node

    Enable isolcpus (CPU): run numactl -H to identify cores

    Pin vCPUs (CPU): sudo virsh vcpupin test 0 6

    Set the CPU into performance mode (CPU): run /etc/init.d/ondemand stop

    Set the processor into pass-through mode (CPU): virsh edit <vm> and add the pass-through CPU setting

    Enable / disable IRQ balance (CPU): service irqbalance start / service irqbalance stop (NOTE: only if IRQ pinning is done!)

    NUMA-aware VM (CPU): edit the VM config via virsh edit <vm>

    IRQ pinning (CPU): find the specific NIC interrupt number from /proc/interrupts; set affinity to a core other than those used for vCPU and vHost pinning

  • KVM Performance Tuning Recommendations, continued (REFERENCE)

    Pin vHost processes (I/O): sudo taskset -pc 4 <pid>, where <pid> is found using ps -ef | grep vhost

    Change the vnet tx queue length to 4000 (I/O): the default tx queue length is 500; sudo ifconfig vnet1 txqueuelen 4000

    Turn off TSO, GSO, GRO (I/O): ethtool -K vnet1 tso off gso off gro off

    Physical NIC configuration (I/O): change rx interrupt coalescing to 100 for the 10G NICs

    Disable KSM (Linux): echo 0 > /sys/kernel/mm/ksm/run

    Disable memballoon (Linux): virsh edit <vm>, find memballoon in the VM config file and change it accordingly

    Disable ARP/IP filtering (Linux):
    sysctl -w net.bridge.bridge-nf-call-arptables=0
    sysctl -w net.bridge.bridge-nf-call-iptables=0
    sysctl -w net.bridge.bridge-nf-call-ip6tables=0

    Optional Linux tuning (Linux):
    sysctl -w net.core.netdev_max_backlog=20000
    sysctl -w net.core.netdev_budget=3000
    sysctl -w net.core.wmem_max=12582912
    sysctl -w net.core.rmem_max=12582912
    service iptables stop (if you don't want the Linux firewall)

    NOTE: these settings may impact the number of VMs that can be instantiated on a server / blade

    NOTE: tuning steps are most impactful for a small number of VMs instantiated on a host; the impact diminishes with a large number of VMs
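    For repeatable application of a subset of these I/O tunings, a small wrapper can be scripted; the sketch below is an assumption-laden example (Linux host, root privileges, example interface name) that applies the vnet queue-length, offload and vHost-pinning steps from the table using the same underlying commands.

```python
# Minimal sketch (assumptions: Linux host, root privileges, interface/CPU values
# are examples). It applies a subset of the I/O tunings listed above: vnet tx
# queue length, TSO/GSO/GRO off, and vHost thread pinning.
import subprocess

def run(cmd: str) -> None:
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

def tune_vnet(vnet: str, txqueuelen: int = 4000) -> None:
    run(f"ip link set {vnet} txqueuelen {txqueuelen}")   # same effect as the ifconfig form above
    run(f"ethtool -K {vnet} tso off gso off gro off")

def pin_vhost_threads(cpu: int) -> None:
    pids = subprocess.run("pgrep vhost", shell=True,
                          capture_output=True, text=True).stdout.split()
    for pid in pids:
        run(f"taskset -pc {cpu} {pid}")

if __name__ == "__main__":
    tune_vnet("vnet1")
    pin_vhost_threads(4)
```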

  • Sample Results of Different Performance Improvements

    Quantitative impact of various hypervisor tuning steps (average throughput relative to the untuned default; KVM + Ubuntu with OVS, 2 vCPU CSR 1000v, XE 3.12 engineering image, IMIX traffic, UCS 220 2.7 GHz, 0.01% FLR):

    Default with hyperthreading: 100%

    Hyperthreading off: 145%

    vCPU pinning only: 174%

    txqueuelen of 4000 only: 509%

    txqueuelen of 4000 + vCPU pinning + vHost pinning + tx/rx offloads off + hyperthreading off: 952%

  • SR-IOV Virtualisation Caveats

    The following features are not available for virtual machines configured with SR-IOV:

    vSphere DPM

    Virtual machine suspend and resume

    Virtual machine snapshots

    MAC-based VLAN for passthrough virtual functions

    Hot addition and removal of virtual devices, memory, and vCPU

    Participation in a cluster environment

    Network statistics for a virtual machine NIC using SR-IOV passthrough

    vSphere vMotion

    Storage vMotion

    vShield

    NetFlow

    VXLAN Virtual Wire

    vSphere High Availability

    vSphere Fault Tolerance

    vSphere DRS

  • VMWare ESXi Fault Tolerance Caveats

    Only works for 1vCPU VMs

    Fault Tolerance is not supported or incompatible in combination with

    Snapshots

    Storage vMotion

    Linked Clones

    VM Backups

    Virtual SAN

    Symmetric multiprocessor VMs

    Physical raw disk mapping

    Paravirtualized guests

    NIC Passthrough

    Hot-plugging devices

    Serial or parallel ports

    IPv6

  • Not tuning ESXi can lead to performance degradation as VMs are added to a server

    The vSwitch maxes out between 3 Gbps and 4 Gbps

    This highlights the importance of direct I/O techniques for full subscription

    ESXi + vSwitch full subscription (XE 3.13)

    For a detailed study, see the latest EANTC report: http://www.lightreading.com/nfv/nfv-tests-and-trials/validating-ciscos-nfv-infrastructure-pt-1/d/d-id/718684

  • Full Subscription Results under KVM + RH for NAT and IPSec (IOS XE 3.16)

    Graphs show total and average (per-VM) throughput under a fully-loaded server, for NAT+QoS+ACL and IPSec+QoS+ACL feature combinations

    Adding VMs to a host does not contribute linearly to system throughput; in other words, average per-VM throughput declines as additional VMs are added

    Marginal differences between hyperthreading on and off!

    Results are similar for 1 vCPU CSR 1000v footprints

    The underlying OVS bottleneck is not reached!

• CSR can reach 40 Gbps with 5 VMs with a vBRAS configuration

Offered load of 5 Gbps per VM on average (test design, not a CSR 1000v VM limit)
Multiple VMs to scale control and data plane in unison
Overall server utilisation is about 24% during the test (measured with mpstat)
Translates to somewhere between 8-9 cores being utilised (out of 36)
With most test iterations, periodic ingress buffer drops per VM were observed; overall the number of drops was
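The utilisation figures above were measured with mpstat; a minimal example of collecting comparable per-core data during a test run (interval and count are arbitrary):

    # Report per-CPU utilisation every 5 seconds, 12 times, while traffic is running
    mpstat -P ALL 5 12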

• Loss Rate Interpretation - Background

Performance results vary depending on how acceptable frame loss is defined. Typical definitions for frame loss rates (FLR) range from:
Absolutely 0 packets lost -> non-drop rate (NDR)
5 packets lost
0.1% of PPS lost
A small relaxation of the FLR definition can lead to significantly higher throughput
Typically, FLR test data is reported for 5 packets lost (to account for warm-up), with multiple consecutive 1-minute runs

(Chart: throughput as a function of acceptable traffic loss per VM, normalised with NDR = 100%; KVM, XE 3.13. X-axis: % of acceptable traffic loss per VM, 0.00-0.75; Y-axis: normalised throughput and % increase in throughput, 0-180%)

  • Determination of Desired Frame Loss Rate

    Throughput can be affected by definition of acceptable loss rates

    Tests measure % of dropped traffic for various traffic loads

    Offer traffic load -> observe loss -> reduce offered load until desired loss rate reached

    BUT: Difficult to get consistent data across multiple runs.

    How to interpret the right loss-rate?

    Example:

    Highest rate at which LR of 0.01% appears -> 475 Mbps

    Lowest rate below which LR of 0.01% is ALWAYS observed -> 374 Mbps

    Loss rate violations at {445, 435, 414, 384} Mbps
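The "offer load -> observe loss -> reduce offered load" procedure can be automated as a simple search. The sketch below is illustrative only: measure_loss_pct is a hypothetical wrapper around your traffic generator that offers a given load (in Mbps) for one run and prints the observed loss in percent; the bounds and step size are placeholders.

    #!/bin/bash
    # Illustrative sketch: binary search for the throughput at a target frame loss rate.
    TARGET_FLR=0.01      # acceptable loss in percent
    LOW=0; HIGH=1000     # Mbps search window (placeholder bounds)

    while [ $((HIGH - LOW)) -gt 10 ]; do
        MID=$(( (LOW + HIGH) / 2 ))
        LOSS=$(measure_loss_pct "$MID")   # hypothetical traffic-generator wrapper
        if awk -v l="$LOSS" -v t="$TARGET_FLR" 'BEGIN { exit !(l <= t) }'; then
            LOW=$MID    # loss acceptable: try a higher load
        else
            HIGH=$MID   # loss too high: reduce the offered load
        fi
    done
    echo "Throughput at ${TARGET_FLR}% FLR is approximately ${LOW} Mbps"

As the example data above shows, repeated runs can cross the loss threshold at different rates, so several consecutive runs per load point are needed before trusting a single search result.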


  • Appendix B

• Glossary

ACL: Access Control List
ANCP: Access Node Control Protocol
ARP: Address Resolution Protocol
ASA: Adaptive Security Appliance
AVC: Application Visibility & Control
BFD: Bidirectional Forwarding Detection
BPDU: Bridge Protocol Data Unit
BRAS: Broadband Remote Access Server
BSS: Business Support System
CAPEX: Capital Expenditures
CDP: Cisco Discovery Protocol
CE: Carrier Ethernet
CE: Customer Edge
CEF: Cisco Express Forwarding
CFM: Configuration and Fault Management
CFS: Completely Fair Scheduler
CFS: Customer Facing Services
CGN: Carrier Grade NAT
CLI: Command Line Interface
CM: Chassis Manager (in IOS XE)
CoA: RADIUS Change of Authorization
COS: Class of Service
COTS: Commercial off-the-shelf
CPS: Calls per Second
DC: Data Center
DCI: Data Center Interconnect
DHCP: Dynamic Host Configuration Protocol
DNS: Domain Name System
DPDK: Data Plane Development Kit
DPI: Deep Packet Inspection
DPM: Distributed Power Management
DRS: Dynamic Resource Scheduling
DSCP: DiffServ Code Point
EAP: Extensible Authentication Protocol
EOAM: Ethernet OAM
ESA: Email Security Appliance
ESC: Elastic Services Controller
ESXi: VMware hypervisor
EVC: Ethernet Virtual Circuit
F/D/C: Fibre / DSL / Cable
FFP: Fast Forwarding Plane (data plane in IOS XE)
FLR: Frame Loss Rate
FM: Forwarding Manager (in IOS XE)
FSOL: First Sign of Life
FT: Fault Tolerance

• Glossary

FW: Firewall
GRE: Generic Routing Encapsulation
GRT: Global Routing Table
GSO: Generic Segmentation Offload
GTM: Go-to-market
HA: High Availability
HQF: Hierarchical Queueing Framework
HQoS: Hierarchical QoS
HSRP: Hot Standby Router Protocol
HT: Hyperthreading
HV: Hypervisor
I/O: Input / Output
IDS: Intrusion Detection System
IP SLA: IP Service Level Agreements
IPC: Inter-process Communication
IPoE: IP over Ethernet
IPS: Intrusion Prevention System
IRQ: Interrupt Request
ISG: Intelligent Services Gateway
ISG TC: ISG Traffic Class
IWAN: Intelligent WAN (Cisco solution)
KSM: Kernel Same-page Merging
KVM: Kernel-based Virtual Machine
L2TPv2: Layer 2 Tunneling Protocol version 2
LAC: L2TP Access Concentrator
LAG: Link Aggregation
LB: Load Balancer
LCM: Life-cycle Manager (for VNFs)
LNS: L2TP Network Server
LR: Loss Rate
LRO: Large Receive Offload
MC: PfR Master Controller
MP-BGP: Multiprotocol BGP
MPLS EXP: Multiprotocol Label Switching EXP field
MS/MR: LISP Map Server / Map Resolver
MSP: Managed Service Provider
MST: Multiple Spanning Tree
NAT: Network Address Translation
NB: Northbound
NE: Network Element
NF: NetFlow
NfV: Network Function Virtualisation
NFVI: NFV Infrastructure
NFVO: NFV Orchestrator
NIC: Network Interface Card

• Glossary

NID: Network Interface Device
NSO: Network Services Orchestration
NUMA: Non-uniform Memory Access
NVRAM: Non-volatile Random Access Memory
OAM: Operations, Administration and Maintenance
OPEX: Operational Expenditures
OS: OpenStack
OSS: Operations Support System
OVS: Open vSwitch
PBHK: Port Bundle Host Key (ISG feature)
PE: Provider Edge
PF: Physical Function (in SR-IOV)
PfR: Performance Routing
PMD: Poll Mode Driver
pNIC: Physical NIC
PnP: Plug and Play
POF: Prime Order Fulfilment
PoP: Point of Presence
PPE: Packet Processing Engine
PPS: Packets per Second
PSC: Prime Services Catalog
PTA: PPP Termination and Aggregation
PW: Pseudowire
PxTR: Proxy Tunnel Router (LISP)
QFP: Quantum Flow Processor
QoS: Quality of Service
RA: Remote Access
REST: Representational State Transfer
RFS: Resource Facing Services
RR: Route Reflector
RSO: Receive Segmentation Offload
Rx: Receive
SB: Southbound
SBC: Session Border Controller
SC: Service Chaining
SDN: Software Defined Networking
SF: Service Function (in SFC architecture)
SFC: Service Function Chaining
SFF: Service Function Forwarder (in SFC architecture)
SGT: Security Group Tag
SIP: SPA Interface Processor
SLA: Service Level Agreement
SLB: Server Load Balancing
SMB: Small and Medium Business
SNMP: Simple Network Management Protocol

• Glossary

SP: Service Provider
SPA: Shared Port Adapter
SR-IOV: Single Root I/O Virtualisation
TCO: Total Cost of Ownership
TOS: Type of Service
TPS: Transparent Page Sharing
TSO: TCP Segmentation Offload
TTM: Time-to-market
UC: Unified Communications
vCPE: Virtual CPE
vCPU: Virtual CPU
VF: Virtual Function (in SR-IOV)
vHost: Virtual Host
VIM: Virtual Infrastructure Manager
VLAN: Virtual Local Area Network
VM: Virtual Machine
vMS: Virtual Managed Services
VNF: Virtual Network Function
VNFM: VNF Manager
vNIC: Virtual NIC
VPC: Virtual Private Cloud
vPE-F: Virtual PE Forwarding Instance
VPLS: Virtual Private LAN Service
VPN: Virtual Private Network
VRF: Virtual Routing and Forwarding
vSwitch: Virtual Switch
VTC: Virtual Topology Controller
VTF: Virtual Topology Forwarder
VTS: Virtual Topology System
WAAS: Wide Area Application Services
WAN: Wide Area Network
WLAN: Wireless LAN
WLC: Wireless LAN Controller
WRED: Weighted Random Early Detection
ZBFW: Zone-Based Firewall
ZTP: Zero Touch Provisioning

  • Appendix C

  • Cisco ASAv Firewall and Management Features

Cisco ASA feature set (ASAv10 / ASAv30):

Removed clustering and multiple-context mode
Parity with all other Cisco ASA platform features
10 vNIC interfaces and VLAN tagging
Virtualisation displaces multiple-context and clustering
SDN (Cisco APIC) and traditional (Cisco ASDM and CSM) management tools

    Dynamic routing includes OSPF, EIGRP, and BGP

    IPv6 inspection support, NAT66, and NAT46/NAT64

    REST API for programmed configuration and monitoring

    Cisco TrustSec PEP with SGT-based ACLs

    Zone-based firewall

    Equal-Cost Multipath

    Failover Active/Standby HA model

  • Protection Across the Attack Continuum with FirePOWERv

    Virtual machine discovery

    Enforce application policy

    Access control to segment security zones

    Visibility into virtual network communications

Protect VMs even as they migrate across hosts

    Intrusion prevention without hairpinning

    Single pane-of-glass across physical and virtual networks

    Automated response via Integration with platform security controls

Attack continuum: BEFORE (Discover, Enforce, Harden) -> DURING (Detect, Block, Defend) -> AFTER (Scope, Contain, Remediate)

• FirePOWERv and Virtual Defense Center

FirePOWERv:
Deployed as a virtual appliance
Inline or passive deployment
Full NGIPS capabilities
Add-on capabilities: Control, Advanced Malware Protection, URL Filtering

Virtual Defense Center:
Deployed as a virtual appliance
Manages up to 25 sensors, physical and virtual
Single pane of glass

(Diagram: virtual IPS appliances deployed in the DC)

  • Virtualised WAAS

Hypervisors: ESXi, Hyper-V, KVM
Interception methods*: AppNav, WCCP
Platforms: UCS or other x86 servers, service container on ISR-4000, UCS-E
(Diagram: hypervisor on a UCS/x86 server, and ISR 4000 Series + UCS E-Series)

    Platform Variants (TCP)

    vWAAS-200

    vWAAS-750

    vWAAS-1300

    vWAAS-2500

    vWAAS-6000

    vWAAS-12000

    vWAAS-50000

  • Branch Office - Local WLAN Controller

    Branches can have local controllers

    Small or Mid-size Branch with vWLC

Overview

(Topology: central site with a backup central controller; Remote Site A with a vWLC, Remote Site B with a WLC 25xx, Remote Site C with a Catalyst 3850, all joined over the WAN via CAPWAP)

Advantages

Cookie-cutter configuration for every branch site
Independence from WAN quality

  • FlexConnect Mode: On Premise or Data Centre

  • Virtualisation of Transport/Forwarding

  • Shared Services

Enterprise Virtualisation Models - Transport Functions

    Virtualisation of Transport plane functions

    L3 routing and packet forwarding

    Packet divert

    Can be on-premise or in larger Enterprise WAN PoPs or in the cloud

    IOS XRv

    CSR 1000v

    Virtual router forwarding engine

    AppNav clustering (WAAS)

    WCCP/PBR

    NSH*

(Diagram: routing and traffic diversion between the Campus and the WAN)

    NSH estimate is for July/August 2016*

(Diagram: CSR 1000V attached to the cloud hypervisor virtual switch in a VPC/vDC, alongside application VMs)

  • Example: AX Transport and CSR 1000v

    CSR 1000v using AppNav for Service Insertion

(Diagram: branch sites behind ASR/ISR routers, with overlapping 10.1.1.1 addressing in VRF A and VRF B, reach per-VRF CSR 1000v instances over the WAN/Internet; each CSR diverts traffic to a vWAAS instance via AppNav)

  • Q & A

  • Complete Your Online Session Evaluation

Learn online with Cisco Live! Visit us online after the conference for full access to session videos and presentations: www.CiscoLiveAPAC.com

Give us your feedback and receive a Cisco 2016 T-Shirt by completing the Overall Event Survey and 5 Session Evaluations:
Directly from your mobile device on the Cisco Live Mobile App
By visiting the Cisco Live Mobile Site http://showcase.genie-connect.com/ciscolivemelbourne2016/
Visit any Cisco Live Internet Station located throughout the venue

T-Shirts can be collected Friday 11 March at Registration

http://www.ciscoliveapac.com/

  • Thank you

  • Appendix

  • Hypervisor Traversal Tax: Example KVM with OVS

    KVM with OVS consumes a vHost thread per configured VM interface

    The vHost thread is very CPU intensive, requires dedicated physical core

    On 16-core server, can only get 3 CSR1000v (2vCPU, 2 i/f each)

    Cores for CSR: 6

    Cores for vPE-F: 2

    Cores for vHost: 6

    Free: 2

    Should be considered when service chaining

Hypervisor traversal tax = 8/16 = 50%
May not be fully utilised!
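One way to see the per-interface vhost threads described above, and to dedicate host cores to them, is sketched below; the core numbers are placeholders, and on production hosts this is typically expressed through libvirt CPU tuning rather than ad-hoc taskset calls.

    # List the vhost-net kernel threads (one per configured VM interface)
    ps -eLo pid,psr,comm | grep vhost

    # Pin the vhost threads to a reserved set of host cores (cores 10-11 are placeholders;
    # for line-rate traffic each busy vhost thread ideally gets its own core)
    for pid in $(pgrep vhost); do
        taskset -cp 10-11 "$pid"
    done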

  • Hypervisors vs. Linux Containers

(Diagram: Type 1 hypervisor - hardware > hypervisor > virtual machines, each with its own operating system, bins/libs, and apps. Type 2 hypervisor - hardware > operating system > hypervisor > virtual machines. Linux Containers (LXC) - hardware > operating system > containers, each with bins/libs and apps, sharing the host kernel)

Containers share the OS kernel of the host and thus are lightweight. However, each container must use the same OS kernel. Containers are isolated, but share the OS and, where appropriate, libs/bins.
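A quick way to observe the kernel sharing described above, assuming an LXC host (the container name demo and the image parameters are placeholders):

    # Kernel version on the host
    uname -r

    # Create and start a container from the download template (placeholder image options)
    lxc-create -n demo -t download -- -d ubuntu -r trusty -a amd64
    lxc-start -n demo

    # The container reports the same kernel version: it shares the host kernel
    lxc-attach -n demo -- uname -r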