Agile OpenStack Networking with Cisco Solutions

Rohit Agarwalla, Technical Leader

DEVNET-1107

[email protected], @rohitagarwalla

Agenda

• Introduction to OpenStack

• Cisco and OpenStack

• OpenStack Networking – Neutron

• Neutron Network Architectures

• Cisco Integrations into Neutron

• Summary

Introduction to OpenStack


OpenStack Overview

Open source cloud computing platform for private and public clouds

Design tenets – scale & elasticity; share nothing & distribute everything

OpenStack Projects

Compute (Nova), Dashboard (Horizon), Database (Trove)

Network (Neutron), Image (Glance), Orchestration (Heat)

Object Storage (Swift), Identity (Keystone), Data Processing (Sahara)

Block Storage (Cinder), Telemetry (Ceilometer), Deployment (TripleO)

Bare Metal (Ironic), DNS (Designate), Application Catalog (Murano)

Containers (Magnum), Key Management (Barbican), Policy (Congress)

File System (Manila), Messaging (Zaqar), …

OpenStack Progress

• Austin – Oct 2010 (first release; started with Compute and Storage services)

• Bexar – Feb 2011

• Cactus – April 2011

• Diablo – Sept 2011

• Essex – April 2012

• Folsom – Sept 2012

• Grizzly – April 2013

• Havana – Oct 2013

• Icehouse – April 2014

• Juno – Oct 2014

• Kilo – May 2015

• Liberty – Oct 2015 (12th OpenStack release: 1,933 contributors, 760 new features, 8,300 bugs fixed, 164 companies)

• Mitaka – April 2016

• Newton – Oct 2016

From roughly 130 contributors and 30 new features in the early releases, the community has grown to 24,000 people and 495 companies.

Cisco and OpenStack


Cisco and OpenStack

Partners/Customers

• Cisco Validated Designs, UCSO

• Work closely and jointly with customers to design and build OpenStack environments

Cloud Services

• OpenStack-based global Intercloud hosted across Cisco and partner data centers

• Metapod (formerly MetaCloud)

Engineering

• Neutron/Cinder/Ironic plugins/drivers for Cisco infrastructure – Nexus, APIC, CSR1K, ASR1K, UCS

• Cisco applications on OpenStack

Community Participation

• Code contributions across several services – Network, Compute, Dashboard, Storage, Containers

• Incubating new OpenStack-related projects – GBP, PlaceWise, AVOS, VMTP

OpenStack primary project code contributions by Cisco (Kilo + Liberty releases)

Projects: Neutron, Barbican, Gnocchi, Heat, Metering (Ceilometer), Horizon, Devstack, Kolla, Magnum

Neutron

• Multiple IPv6 prefixes, IPv6 prefix delegation

• IPv6 router support

• VLAN trunking

• UCSM, Nexus drivers

• ASR1000 driver

• CSR1Kv VPN driver

Barbican

• Transport Layer Security

• Subordinate certificate feature

Gnocchi

• Archive policy per metric level

• New resources for Neutron PCI passthrough and Nova flavor

Heat

• Heat template improvements

• Neutron IPv6 and L3 plugin support

Metering (Ceilometer)

• Kafka publisher

• Alarm severity

• Network services notification plugin

• Resource metadata caching

Horizon

• Curvature panel

• Ceph panel

Kolla

• Containers – Ceilometer, Mongo, Neutron

• Container sets – database-control, messaging-control, service-control, compute-control, compute-operation-nova

Magnum

• Kubernetes plugin

• Python API for k8s CLI

• Container Networking Model

OpenStack Networking – Neutron

OpenStack Network Architecture

Node roles:

• Controller Node(s) – run the database, message queue server, API services, scheduler, …

• Network Node(s) – run network service agents

• Compute Node(s) – run compute and network agents and host tenant VMs (e.g. Tenant A)

The nodes are interconnected by four networks, with a router providing the path from the external network to the Internet:

• Management Network – used for internal communication between OpenStack components; reachable only within the data center

• API Network – exposes all OpenStack APIs, including the OpenStack Networking API, to tenants; reachable by tenants

• Data Network – used for VM data communication within the cloud deployment; reachable within the tenant address space

• External Network – used to provide VMs with Internet access; reachable by anyone from the Internet

Neutron Overview

Logical model: Tenant A has a router connecting Subnet Red and Subnet Blue, each with VMs attached.

Physical implementation: the VMs run on compute nodes and attach to local vswitches; the router is realized as a namespace on the network node(s). Compute, network, and controller nodes are connected over the data, management, and API networks, with the network node bridging to the external network and the Internet.

OpenStack Neutron Architecture

Neutron Server

• Core + extension REST APIs

• Message queue for communicating with Neutron agents

• Core and service plugins

• Different vendor core plugins

• Different network technology support

• ML2 plugin with type and mechanism drivers

• Service plugins with backend drivers

Core API resources: Network, Port, Subnet

Resource and attribute extension API: ProviderNetwork, PortBinding, Router, Quotas, SecurityGroups, AgentScheduler, LBaaS, FWaaS, VPNaaS, …

Service plugins and backends: Load Balancer (HAProxy), Firewall (IPTables), VPN (StrongSwan), L3 Services (namespaces)

ML2 type drivers: VLAN, GRE, VXLAN

ML2 mechanism drivers: Cisco Nexus, OVS, OpenDaylight, APIC, more vendor drivers (other vendor core plugins exist alongside ML2)

Agents, reached over the message queue: DHCP agent (dnsmasq), L3 agent (IPTables on the network node), L2 agent (vSwitch)
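The ML2 split between type drivers (which allocate a network's segmentation: a VLAN ID, a VNI, …) and mechanism drivers (which program a particular backend) can be sketched in a few lines of Python. This is an illustrative model only – the class and method names below are invented for the sketch and are not Neutron's actual driver API.

```python
# Minimal sketch of ML2's type-driver / mechanism-driver split.
# Names are illustrative; Neutron's real interfaces differ.

class VlanTypeDriver:
    """Allocates a segmentation ID from a VLAN range."""
    network_type = "vlan"

    def __init__(self, start=100, end=199):
        self._pool = iter(range(start, end + 1))

    def allocate_segment(self):
        return {"network_type": self.network_type,
                "segmentation_id": next(self._pool)}


class LoggingMechanismDriver:
    """Stands in for a backend driver (OVS, Cisco Nexus, ...)."""
    def __init__(self, name):
        self.name = name
        self.seen = []

    def create_network(self, segment):
        # A real driver would program a switch or notify an agent here.
        self.seen.append(segment["segmentation_id"])


class Ml2Plugin:
    """Allocates a segment, then dispatches to every mechanism driver."""
    def __init__(self, type_driver, mech_drivers):
        self.type_driver = type_driver
        self.mech_drivers = mech_drivers

    def create_network(self):
        segment = self.type_driver.allocate_segment()
        for driver in self.mech_drivers:
            driver.create_network(segment)
        return segment


ovs = LoggingMechanismDriver("openvswitch")
nexus = LoggingMechanismDriver("cisco_nexus")
plugin = Ml2Plugin(VlanTypeDriver(), [ovs, nexus])
seg = plugin.create_network()  # both drivers see the same segment
```

The point of the split is visible in the last lines: one segment allocation, fanned out to every registered backend.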

Neutron Architectures


Layer 2 network tenant topologies

Three variants, each with VMs (VM1–VM4) on compute nodes attached to vswitches on the data network, below a fabric leaf / top-of-rack switch:

• Host- and network-based VLAN – the tenant VLAN extends from the host vswitch through the ToR

• Host-based overlays – the overlay is terminated in the vswitch on each host

• Network-based overlays – the host uses VLANs and the ToR maps VLAN to overlay

Layer 2 network tenant topologies – Design Considerations

• Number of tenant network segments

• VLAN-based tenant networks

• Host

• Host and network

• VXLAN-based tenant networks

• Host

• VXLAN offload – network

• Multicast vs. controller

Layer 3 tenant network topologies

Three placement options for the tenant router:

• Linux host – routing in namespaces on the network node(s); compute nodes reach the network node over the data network

• Service VMs – routing in service VMs running on the compute nodes themselves

• Fabric or service node – routing in the physical fabric or a dedicated service node

Layer 3 network tenant topologies – Design Considerations

• Number of Tenant Routers

• External connectivity for tenant networks

• Floating IPs

• L3 Traffic Pattern E-W and N-S Routing

Cisco integrations into Neutron

Neutron Layer 2 Default Implementation

Network REST API requests go to the Neutron server (ML2 core plugin with Open vSwitch/Linux Bridge mechanism drivers), which sends RPC messages to the agents on the network and compute nodes that manage the vswitches carrying the VMs.

• Implements Neutron core resources

• Open vSwitch and Linux Bridge mechanism drivers

• Agents on network and compute nodes

• Host-based VLAN or overlay (VXLAN, GRE) type drivers

Neutron Reference – East-West L2 (Switched) Traffic

The original slide animates the packet path from VM1 to VM3: out of VM1 through the vswitch on its Nova host, across the data network, and through the vswitch on the destination host to VM3 – switched within the tenant network, without touching the Neutron host(s).

Neutron Cisco Nexus Driver

Create/update port requests sent to Neutron are handled by the Cisco Nexus mechanism driver under the ML2 core plugin, which configures the Nexus ToR over NETCONF (via ncclient); the VMs run on Nova compute nodes beneath the ToR.

Features

• Works with multiple Nexus platforms

• VLAN configuration

• VXLAN configuration

• Nexus_VXLAN type driver

• Multicast

• VLAN-to-VNI association

Benefits

• No need to trunk all tenant VLANs on compute-node interfaces on the ToR

• Dynamic provisioning/deprovisioning on the ToR

• Network-based overlays

Neutron Cisco Nexus1000v Driver (KVM)

The Cisco N1Kv mechanism driver under the ML2 core plugin talks to the N1Kv VSM over its REST API; the N1Kv VEM runs on the compute nodes alongside the VMs. Mapping: Network Profile = Network Segment Pool, Policy Profile = Port Profile.

Features

• Associate network profiles with Neutron networks

• Associate policy profiles with Neutron ports

• Supports VLAN and VXLAN (unicast and multicast) network segmentation

• Horizon integration

Benefits

• Logical grouping of network segments

• Security, monitoring, Quality of Service (QoS)

• Enhanced visibility and manageability of virtual machine traffic

Neutron Cisco UCSM Driver (KVM)

Create/update port requests are handled by the Cisco UCSM driver under the ML2 core plugin, which configures the UCS Fabric Interconnect through the UCSM SDK; the VMs run on the compute nodes.

Features

• Nova and Neutron enhancements to support SR-IOV

• Supports VLAN configuration of SR-IOV ports (using port profiles) and vNIC ports (using service profiles)

• Enables configuration of VLAN profiles and automatic association with network ports

Benefits

• SR-IOV and non-SR-IOV based UCS Fabric Interconnect configurations

• Configures multiple UCSMs

Neutron DHCP Implementation

Network REST API requests go to the Neutron DHCP service in the Neutron server, which sends RPC messages to the DHCP agent on the network node; the agent manages the dnsmasq processes.

• A namespace and a dnsmasq instance for every network

• dnsmasq reloads with every port add/delete
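The per-network layout above can be modeled in a few lines. The namespace naming follows Neutron's actual qdhcp-&lt;network-id&gt; convention; the class itself is a toy model invented for illustration, not Neutron code.

```python
# Toy model of Neutron's per-network DHCP layout:
# one namespace + one dnsmasq per network, reloaded on port changes.

class DhcpAgentModel:
    def __init__(self):
        self.namespaces = {}   # network_id -> namespace name
        self.reloads = 0       # dnsmasq reload count

    def network_created(self, network_id):
        # One namespace (holding one dnsmasq) per network,
        # named qdhcp-<network-id> as in Neutron.
        self.namespaces[network_id] = f"qdhcp-{network_id}"

    def port_changed(self, network_id):
        # Every port add/delete rewrites the host file and
        # reloads dnsmasq for that network.
        assert network_id in self.namespaces
        self.reloads += 1


agent = DhcpAgentModel()
agent.network_created("a1b2")
agent.port_changed("a1b2")   # port add
agent.port_changed("a1b2")   # port delete
```

The reload-per-port-change behavior is what makes dnsmasq churn noticeable on large, busy networks.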

Neutron Reference – DHCP Traffic

The original slide animates the DHCP request/response path: from VM1 through the vswitch on its Nova host, across the data network, to the DHCP port (dnsmasq namespace) on the Neutron host(s), and back.

Neutron DHCP Implementation with Cisco Prime Network Registrar (CPNR)

The DHCP agent on the network node runs a DHCP relay that forwards DHCP traffic to a CPNR server; the Neutron server also talks to CPNR over its REST API.

• DHCP configuration includes CPNR API endpoint configuration

• Mapping –

• Network to Virtual Private Network (VPN)

• Subnet to Scope

• Requests and responses handled using UDP ports

• Benefits

• Relay is stateless and can run Active-Active

• Highly available CPNR server for all tenants

Neutron Routing Implementation

Routing REST API requests go to the Neutron L3 service plugin; an agent scheduler picks an L3 agent on a network node. The L3 agent sets up the default gateway, a namespace, and IPTables rules: each namespace maps to a Neutron logical router, and IPTables handles address translation. L3 traffic from VMs on the compute nodes goes through the network node. Neutron router HA is available using VRRP.
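The namespace-plus-IPTables mapping above can be sketched concretely for a floating IP. Router namespaces really are named qrouter-&lt;router-id&gt;, and the rules below approximate the DNAT/SNAT entries the L3 agent installs – but exact chain names and options vary by release, so treat this as a sketch, not Neutron's literal output.

```python
# Sketch of the address translation an L3 agent sets up for a
# floating IP inside a router namespace (qrouter-<router-id>).
# Rule strings approximate Neutron's iptables entries.

def floating_ip_rules(router_id, floating_ip, fixed_ip):
    namespace = f"qrouter-{router_id}"
    dnat = (f"-d {floating_ip}/32 -j DNAT "
            f"--to-destination {fixed_ip}")   # inbound: floating -> fixed
    snat = (f"-s {fixed_ip}/32 -j SNAT "
            f"--to-source {floating_ip}")     # outbound: fixed -> floating
    return namespace, [dnat, snat]


ns, rules = floating_ip_rules("r1", "198.51.100.10", "10.0.0.5")
```

Because every translation is a per-flow iptables entry in software, this is exactly where the scale and throughput limits discussed on the ASR1K slides come from.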

Neutron Reference – East-West L3 (Routed) Traffic

The original slide animates the packet path from VM1 to VM4: traffic leaves VM1, crosses the data network to the virtual router namespace on the Neutron host(s), is routed there, and crosses the data network again to VM4's host.

Neutron Reference – North-South L3 Traffic (NAT)

The original slide animates the packet path from VM1 to the Internet: traffic crosses the data network to the virtual router namespace on the Neutron host(s), is NATed there, and exits via the external network and router to the Internet.

Issues in Neutron Reference L3 and ASR1K Solutions

• NAT for External Connectivity:

• Issue - Scale limitation in Linux iptables software NAT.

• Solution - ASR1K can scale up to 4 million dynamic NAT entries and 16K static NAT entries.

• Tenant Routing:

• Issue - Scale limitations in Linux namespaces based software tenant networking.

• Solution - ASR1K uses Virtual Routing and Forwarding (VRF) instances for tenant routers. ASR1K can scale up to 4k VRFs (8k in an upcoming release).

• Tenant Networks:

• Issue- Scale limitations in Linux software based interfaces.

• Solution - ASR1K plugin maps tenant networks to sub-interfaces on ASR1K. ASR1K supports up to 64k sub-interfaces.

• Data Throughput:

• Issue - Performance limitations with software packet forwarding and NAT on generic compute hardware.

• Solution - ASR1K can perform packet forwarding and NAT at rates up to 230 Gbps.

Neutron Cisco ASR1000 for Neutron L3 Service

The Neutron L3 service plugin uses a routing device driver (ASR1K) and the Cisco Config Agent, which configures the ASR1K (and Nexus) over NETCONF.

• Mapping of the Neutron reference L3 implementation –

• Linux namespaces – ASR1K VRF

• Internal router ports – ASR1K VLAN or port-channel sub-interfaces

• External gateway ports – ASR1K VLAN or port-channel sub-interfaces

• Linux IPTables – ASR1K NAT

• Benefits

• Routing using physical infrastructure

• Support for HSRP and port channels

• Neutron multi-region support

OpenStack Neutron + Nexus + ASR: Physical Topology Example

ASR 1000 routers sit between the Layer-3 network core and a Nexus Layer-2 fabric carrying tenant VLANs and external traffic. Nova compute nodes and the OpenStack controller (Neutron server with Cisco Config Agent) attach to the fabric; a management network is used for NETCONF provisioning.

ML2 Nexus and ASR1K – East-West L3 (Routed) Traffic

The ML2 Nexus driver provisions the Nexus ToRs; the ASR1K L3 plugin provisions a VRF with default gateway and NAT (to global routing) on the ASR1K. The original slide animates the packet path from VM1 to VM4 through the virtual router (VRF) on the ASR1K across the L3-routed data network.

ML2 Nexus and ASR1K – North-South L3 Traffic (NAT)

Same topology: the ASR1K L3 plugin provides a VRF with default gateway and NAT to global routing. The original slide animates the packet path from VM1 to the Internet via the ASR1K.

Neutron Cisco CSR1000v for Neutron L3 Service

The Neutron L3 service plugin (with a device manager and scheduler) uses the Cisco CSR1Kv device driver and the Cisco Config Agent, which configures CSR1Kv instances over REST API/NETCONF; the CSR1Kv runs as a VM on the compute nodes alongside tenant VMs.

• Mapping of the Neutron reference L3 implementation –

• Linux namespaces – CSR1Kv VRF

• Router ports (qr) on bridge – CSR1Kv VLAN sub-interfaces

• Gateway ports (qg) on bridge – CSR1Kv VLAN sub-interfaces

• Linux IPTables – CSR1Kv NAT

• Benefits

• Virtual form factor

• Integrates with N1Kv and OVS

• Device that can offer more services

Neutron Cisco Application Policy Infrastructure Controller (APIC) Driver

The Cisco L2 APIC driver (under the ML2 core plugin) and the Cisco L3 APIC driver (under the Neutron L3 plugin) translate Neutron API objects – Network, Router, Subnet, Security Group – into APIC objects over its REST API: Network maps to an EPG, Router to a Contract. The ACI spine/leaf switches provide distributed L2/L3 functionality: L2/L3 is enforced in the fabric, while security groups are enforced on the hypervisor.
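The Network→EPG and Router→Contract translation above can be sketched as a plain mapping function. The data structures are invented for illustration; the real APIC driver creates these objects through the APIC REST API.

```python
# Sketch of the Neutron -> ACI object mapping described above:
# network -> bridge domain + EPG, router -> contract. Illustrative only.

def map_to_aci(neutron_obj):
    kind, name = neutron_obj
    if kind == "network":
        # Each Neutron network becomes a bridge domain / EPG pair.
        return [("bridge_domain", name), ("epg", name)]
    if kind == "router":
        # Routed connectivity between EPGs is expressed as a contract.
        return [("contract", name)]
    raise ValueError(f"unmapped Neutron object kind: {kind}")


aci = map_to_aci(("network", "web-net")) + map_to_aci(("router", "r1"))
```

This is the same model the Group-Based Policy slides later make explicit: connectivity as groups and contracts rather than subnets and routes.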

Scaling Neutron Networks Using Hierarchical Port Binding on Cisco ACI

Martin Klein, SAP

Feb 17, 2016

© 2016 SAP SE or an SAP affiliate company. All rights reserved.

Agenda

• Where do we come from – Overview of the SAP Monsoon Converged Cloud

• Where do we want to go – Transitioning a private IaaS platform to OpenStack

• How do we want to get there – Architecting OpenStack Neutron to enable growth

SAP Monsoon Converged Cloud

SAP-internal IaaS/PaaS platform, currently running an in-house SW stack with some OpenStack components, offering a custom API and Amazon EC2-compatible APIs.

IaaS

• Global footprint with currently 6 regions on 4 continents

• Provides a unified global platform for SAP's cloud offerings

• Offers compute, block storage, and limited networking

PaaS

• Focus on automation and continuous delivery using OpsCode Chef

• Optional for customers who don't bring their own PaaS service

SAP Monsoon Converged Cloud – Current Scale

Platform is offered in 6 regions, extending to 13 in 2016.

Absolute size

• CPU: 17,000 cores

• Memory: 500 TB

• Storage: 5.2 PB

• Instances: 21,000

• Volumes: 44,000

Operations

• Instance operations: 3,000/day

• Instance growth: 250/day

SAP Monsoon Converged Cloud – OpenStack Transition

Replace all infrastructure controllers with an OpenStack implementation.

Existing implementations

• Replace with standard OpenStack implementations

• Transition custom services to an OpenStack-like schema

• Introduce a thin layer on top of Keystone to reflect special requirements

New features

• Object storage

• Full Neutron networking

SAP Monsoon Converged Cloud – Key Challenges

Running OpenStack on an enterprise hardware stack.

OpenStack challenges

• Running Nova in a multi-hypervisor environment

• Scaling Neutron beyond the limits of VLAN

Infrastructure challenges

• Scaling a network fabric beyond 4k L2 networks

• Attaching arbitrary devices to the fabric without additional requirements on connected devices

• Finding a universally available overlay protocol

Neutron Hierarchical Port Binding

Neutron Flat Port Binding

Network details (global):

• network_type: vxlan

• segmentation_id: 14410

Network requirements:

• All devices use one protocol

• All network layers are protocol aware

In the diagram, the VXLAN segments (VNI 14410, 14411) run end to end: from the hypervisor vswitch ports, through the edge switches and core network, to the storage and network devices.

Neutron Hierarchical Port Binding

Network details (global):

• network_type: vxlan

• segmentation_id: 14410/11

Network local (per segment):

• network_type: vlan

• segmentation_id: locally chosen

Network requirements:

• Core/edge devices share a protocol

• Connected devices need not be overlay aware

In the diagram, only the core carries the overlay (VNI 14410–14411); each edge switch translates to locally significant VLANs on its access ports (e.g. VLANs 40 and 10 on one switch, VLANs 15 and 10 on the other).
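The local-VLAN-per-edge-switch scheme can be modeled in a few lines. This sketch (all names invented) allocates, for one global VXLAN network, an independent VLAN on each switch that has a bound port – so the same network may carry different VLAN IDs on different switches, which is exactly what lets the fabric scale past 4k L2 networks.

```python
# Toy model of hierarchical port binding: one global VXLAN segment,
# with a locally allocated VLAN per edge switch. Illustrative only.

class HierarchicalBinder:
    def __init__(self, vlan_range=range(10, 4095)):
        self.vlan_range = vlan_range
        self.bindings = {}   # (switch, network vni) -> local vlan
        self.used = {}       # switch -> vlans already used on it

    def bind_port(self, switch, vni):
        key = (switch, vni)
        if key in self.bindings:          # network already on this switch
            return self.bindings[key]
        used = self.used.setdefault(switch, set())
        vlan = next(v for v in self.vlan_range if v not in used)
        used.add(vlan)
        self.bindings[key] = vlan
        return vlan


b = HierarchicalBinder()
v1 = b.bind_port("leaf-1", 14410)   # first network on leaf-1
v2 = b.bind_port("leaf-2", 14410)   # same network, other switch
v3 = b.bind_port("leaf-1", 14411)   # second network on leaf-1
v4 = b.bind_port("leaf-1", 14410)   # reuses leaf-1's existing binding
```

The 4k VLAN limit now applies per edge switch rather than fabric-wide, while the global VNI space identifies the network everywhere.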

ACI Implementation I

ACI Network

Network details (global):

• Network = BD/EPG pair

• Identified by Neutron ID

Network local (per segment):

• Segment = physical domain

• Node = static path binding

In the diagram, the ACI spine/leaf fabric takes the place of the core and edge switches: each leaf carries static bindings that map its access VLANs (e.g. 40, 10, 15) into the EPGs (EPG 1, EPG 2).

ACI Implementation II – Neutron Networks as EPGs

• Neutron network = bridge domain

• Neutron network = endpoint group

ACI Implementation III – ACI Static Bindings

• Static binding = physical connection to OpenStack node

• Neutron network = endpoint group

ACI Implementation IV – Neutron Configuration (plugin.ini)

The slide compares a Neutron plugin.ini without and with the hierarchical driver. Hierarchical drivers need to be at the top of the mechanism driver list for partial binding to occur.

ACI Implementation V – Code

• Neutron ML2 mechanism driver currently not upstreamed

• Differences from the Cisco ACI ML2 mechanism driver:

• No L3 support

• Not bound to the VLAN network type

• No LACP host discovery

• Currently working with Cisco ACI development to merge functionality into a single mechanism driver

Thank you

Contact information:

Martin Klein

Cloud Infrastructure Architect

[email protected]

Group-Based Policy Model

Policy Group: Set of endpoints with the same properties. Often a tier of an application.

Policy RuleSet: Set of classifiers/actions describing how Policy Groups communicate.

Policy Classifier: Traffic filter including protocol, port, and direction.

Policy Action: Behavior to take as a result of a match. Supported actions include "allow" and "redirect".

Service Chains: Set of ordered network services between groups.

L2 Policy: Specifies the boundaries of a switching domain. Broadcast is an optional parameter.

L3 Policy: An isolated address space containing L2 Policies/Subnets.

The accompanying diagram shows two Policy Groups, each a set of Policy Targets within its own L2 Policy inside an L3 Policy; one group provides and the other consumes a Policy RuleSet, whose Policy Rules pair Classifiers with Actions and can reference a Service Chain of nodes.

Group Based Policy and Neutron

Group Based Policy (GBP) objects – Policy Group, RuleSet – can be rendered two ways: the GBP Neutron driver maps them onto Neutron plugins/drivers (Network, Router) for VMs attached to vswitches on compute nodes, while the APIC GBP driver programs them into APIC over its REST API, with the ACI spine/leaf switches providing distributed L2/L3 functionality.

Create classifier/rule:

gbp policy-classifier-create web-traffic --protocol tcp --port-range 80 --direction in

gbp policy-rule-create web-policy-rule --classifier web-traffic --actions allow

Create Policy RuleSet:

gbp ruleset-create web-ruleset --policy-rules web-policy-rule

Create group:

gbp group-create web

Group association:

gbp group-update web --provided-rulesets web-ruleset

Launch web server VM using an endpoint in the EPG:

gbp member-create --group web web-1

Summary of OpenStack Integration with Cisco Networking Solutions Presented

• Network Layer 2 – Virtual switch: Nexus 1000v. Kilo code: StackForge Networking-Cisco; Liberty: OpenStack Cisco Networking. Status: Preview.

• Network Layer 2 – SR-IOV, non-SR-IOV: UCS Fabric Interconnect. Kilo code: StackForge Networking-Cisco; Liberty: OpenStack Cisco Networking. Status: Preview.

• Network Layer 2 – Physical switch: Nexus. Kilo code: StackForge Networking-Cisco; Liberty: OpenStack Cisco Networking. Status: Preview.

• DHCP/IPAM: Prime Network Registrar. Not upstream. Status: Preview.

• Network Layer 3 – Virtual router: Cloud Services Router 1000v. Kilo code: StackForge Networking-Cisco; Liberty: OpenStack Cisco Networking. Status: Preview.

• Network Layer 3 – Physical router: ASR 1000. Kilo: not upstream; Liberty: OpenStack Cisco Networking. Status: Preview.

• Network Services – Virtual firewall and VPN: Cloud Services Router 1000v. Kilo: OpenStack Neutron Firewall/VPN; Liberty: OpenStack Neutron Firewall/VPN. Status: Preview.

• Network Layer 2, Layer 3, Services – Controller: Application Policy Infrastructure Controller. Kilo: APIC L2/L3 in StackForge Networking-Cisco; Liberty: APIC L2/L3 in OpenStack Cisco Networking. Status: Released.

• Declarative policy model – Group Based Policy framework: Group Based Policy. Kilo: OpenStack Group Based Policy; Liberty: OpenStack Group Based Policy. Status: Released.

Summary

• OpenStack is rapidly becoming the de facto standard for data center orchestration

• Cisco's broad-based OpenStack strategy spans products, partners, and services

• Cisco is a leading contributor to projects such as Neutron and others in the OpenStack community

• Wide range of Cisco solutions available for integration with OpenStack Networking

• Still lots to do…..

• More information can be found at

• www.cisco.com/go/openstack

• https://developer.cisco.com/openstack/


Partner OpenStack Distributions on Cisco Infrastructure

Collateral – Release Date

• Deploying Red Hat Enterprise Linux OpenStack Platform 3.0 on FlexPod with Cisco UCS, Cisco Nexus and NetApp Storage – Nov 2013

• SUSE Cloud Integration with Cisco UCS and Cisco Nexus Platforms – March 2014

• Accelerate Cloud Initiatives with Cisco UCS and Ubuntu OpenStack – May 2014

• Ubuntu OpenStack Architecture on Cisco UCS Platform – June 2014

• Red Hat Enterprise Linux OpenStack Platform 4.0 on Cisco UCS and Cisco Nexus – July 2014

• Hadoop as a Service (HaaS) with Cisco UCS Common Platform Architecture (CPA v2) for Big Data and OpenStack – August 2014

• Red Hat OpenStack Architecture on Cisco UCS Platform – Sept 2014

• InterCloud Data Center ACI 1.0 Implementation Guide – Feb 2015

• FlexPod Datacenter with Red Hat Enterprise Linux OpenStack Platform – Sept 2015

Call to Action

• Visit the World of Solutions for

• Cisco Campus

• Walk in Labs

• Technical Solution Clinics

• Meet the Engineer

• Lunch and Learn Topics

• DevNet zone related sessions


Complete Your Online Session Evaluation

• Please complete your online session evaluations after each session. Complete 4 session evaluations and the overall conference evaluation (available from Thursday) to receive your Cisco Live T-shirt.

• All surveys can be completed via the Cisco Live Mobile App or the Communication Stations

Thank you
