
Page 1

Data Center Networking Introduction

Page 2

What makes a data center network?

– Local Area Network (LAN) / Campus Networks
  – Same geographical location: building, campus, etc.
  – Wired and wireless network connects users, IP phones and wireless APs
  – Typical features required: PoE, 802.1X, etc.

– Data Center Networks
  – Same geographical location
  – Connects servers/VMs/containers, applications, storage, firewalls/load balancers, etc. – wired connectivity
  – Stable, low-latency fabrics with high availability / high performance and throughput / density and scale
  – Build revenue for the business (e-commerce)!
  – Typical features required: VXLAN/EVPN, BGP, OSPF, DCB, etc.
  – Focus on improving East-West traffic between racks

(Diagram: spine and leaf switches in a spine-leaf topology)

Page 3

What is a network fabric?

A marketing term – in practice, a fabric should:

– Optimally interconnect 1,000, 10,000, 100,000 or more end points (servers, storage)
– Provide redundancy when any node or any link fails
  – Failure will happen – it's just a question of time
– Minimize the number of hops to reach any other peer in the fabric
  – Latency impact
– East/West (E/W) traffic vs North/South (N/S) traffic
  – E/W traffic = servers to servers inside the DC
  – N/S traffic = clients to servers entering / servers to clients leaving the DC

Page 4

Data Center Networking Architectures

Page 5

Scalability, Agility, Orchestration
Enterprise Datacenter Network Architecture Evolution

(Diagram: evolution from a classic underlay toward a VXLAN overlay)
– Traditional 3-layer design with STP (L2/L3, LACP)
– Optimized L2/L3 fabric – IRF/VSX MLAG
– L2 fabric – spine & leaf with L2 ECMP (TRILL/SPB)
– L3 fabric – spine & leaf with L3 ECMP, VXLAN*, EVPN and network virtualization** (VMs on vSwitches), using SW VTEPs, SW & HW VTEPs, or HW VTEPs

* VXLAN connections are created automatically, on demand, between leaf switches/vSwitches.
** Network virtualization – VMware NSX, OpenStack, HPE Distributed Cloud Networking/Nuage

Page 6

Does Every DCN Solution = Spine/Leaf?

(Diagrams: an L3 spine-leaf fabric with L2 toward the servers; a 1-tier data center built around a core at the L2/L3 boundary; a multi-tier data center with core, aggregation and access layers)

– Spine = Multiple individual backbone devices that provide redundant connectivity for each leaf

– Leaf = Switch which connects to every spine switch (can be a VTEP, but this is not mandatory); gives servers equidistant entry into the fabric with no constraints on workload placement

– Core = A single device (logical or physical) that provides centralized connectivity to other devices (servers/switches)

– Aggregation = Aggregates multiple access switches – usually performs L2/L3 services

– Access = Typically connects into a Core or Aggregation device – usually running L2 services

– ToR = Umbrella term, referring to a switch located at the Top-of-Rack

Page 7

EoR / MoR (End of Row / Middle of Row)

– EoR/MoR refers to the physical placement of switches: the switches serving a row of racks are consolidated in one rack at the end (EoR) or middle (MoR) of the row
– Server-to-switch cables stretch from rack to rack; usually requires less equipment than a ToR deployment
– Usually lower latency for intra-row traffic because of fewer hops
– Reduced fault isolation and scalability
– EoR/MoR switches could be spine switches which connect to ToRs within the same row
– Can be considered as one POD; replicate the design to scale out to multiple PODs

(Diagrams: EoR and MoR switch placements uplinked to the data center spine/core/WAN edge, and EoR/MoR switches aggregating the ToR switches of the same row)

Page 8

Data Center Fabric (Spine-Leaf) – Scaling up the leafs

(Diagram: spine switches with 32 x 100G ports connected over 40/100G links to Leaf 1 through Leaf 32 and Leaf 33 through Leaf 64; leaf pairs are joined by an ISL for MLAG)

Question: What determines the number of leafs supported in a spine-leaf topology?
Answer: The number of physical ports supported in a single spine switch.

– Every leaf needs to connect to every spine

– Recommendation is not to use VSX / IRF on spines
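As a rough illustration of the point above, here is a minimal sketch, assuming a simple two-tier fabric where every leaf spreads its uplinks evenly across the spines (the function name and port counts are illustrative, not from the slide):

```python
def max_leafs(spine_port_count: int, spine_count: int, uplinks_per_leaf: int) -> int:
    """Upper bound on leaf count in a two-tier spine-leaf fabric.

    Every leaf connects to every spine, so the limit is set by the number
    of physical ports available on a single spine switch.
    """
    if uplinks_per_leaf % spine_count != 0:
        raise ValueError("uplinks per leaf should spread evenly across the spines")
    ports_used_per_leaf_on_each_spine = uplinks_per_leaf // spine_count
    return spine_port_count // ports_used_per_leaf_on_each_spine


# Example: 32 x 100G ports per spine, 4 spines, one uplink from each leaf to each spine.
print(max_leafs(spine_port_count=32, spine_count=4, uplinks_per_leaf=4))  # -> 32 leafs
```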

Page 9

Data Center Fabric (Spine-Leaf) – Understanding oversubscription

(Diagram: each leaf has 40/100G uplinks to Spine-1 through Spine-4, giving 160G / 400G of uplink capacity per leaf (4 x 40G / 4 x 100G))

• Scale of the fabric is defined by the port density of the spine switch
• Fabric bandwidth can be increased by adding more spine switches

48 x 10G ports = 480G
– 40G uplinks = 3:1 oversubscription (480G / 160G = 3)
– 100G uplinks = 1.2:1 oversubscription (480G / 400G = 1.2)

48 x 25G ports = 1,200G
– 40G uplinks = 7.5:1 oversubscription (1,200G / 160G = 7.5)
– 100G uplinks = 3:1 oversubscription (1,200G / 400G = 3)

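To make the arithmetic explicit, a minimal sketch that reproduces the ratios above (plain arithmetic, no vendor specifics assumed):

```python
def oversubscription(downlink_count: int, downlink_gbps: int,
                     uplink_count: int, uplink_gbps: int) -> float:
    """Ratio of server-facing bandwidth to fabric-facing bandwidth on a leaf."""
    downlink_bw = downlink_count * downlink_gbps
    uplink_bw = uplink_count * uplink_gbps
    return downlink_bw / uplink_bw


print(oversubscription(48, 10, 4, 40))    # 3.0  -> 3:1   (480G / 160G)
print(oversubscription(48, 10, 4, 100))   # 1.2  -> 1.2:1 (480G / 400G)
print(oversubscription(48, 25, 4, 40))    # 7.5  -> 7.5:1 (1,200G / 160G)
print(oversubscription(48, 25, 4, 100))   # 3.0  -> 3:1   (1,200G / 400G)
```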

Page 10

Aruba Data Center Networks – Benefits of modern DCs

– A stable, low-latency fabric with high availability / performance / density / scalability
– N/S campus/client traffic connectivity achieved via border switches (service leafs) / routers
– L2 extension between racks: essentially driven by VM mobility
  – VXLAN is the de-facto solution adopted by many overlay vendors
  – Scalable: up to 16M Virtual Network Identifiers (VNIs) to support multi-tenancy (see the sketch after this list)
– Oversubscription
  – Spine and leaf for fewer layers and reduced hop count / latency / oversubscription levels
  – Designed for E/W application traffic performance (80% of traffic is E/W)
– MAC address explosion
  – DC fabric becomes a big L3 domain (no STP) with L2 processing (encapsulation / de-capsulation) at the edge
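The "16M" figure comes from the 24-bit VXLAN Network Identifier field, versus the 12-bit 802.1Q VLAN ID; a one-line check of both numbers:

```python
# VXLAN VNI is a 24-bit field; the 802.1Q VLAN ID is a 12-bit field.
print(2 ** 24)       # 16,777,216 possible VNIs ("up to 16M")
print(2 ** 12 - 2)   # 4,094 usable VLAN IDs (0 and 4095 are reserved)
```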

Page 11

Data Center Networking Portfolio

Page 12

Addressable Market for Aruba Switching will Double from CY18

(Chart: worldwide TAM ($B) per year from 2017 to 2022, split into Campus Access, Campus Core & Agg, DC in the Enterprise, DC for Tier 2 Cloud, and Telco, with Aruba portfolio breadth and strength expanding into these segments from 2015+, 2017+, 2019+ and 2020+)

Note: Excludes hyper-scale data center TAM. Source: Dell'Oro (Worldwide Datacenter Ethernet Switching Revenue 2016-22); HPE Market Model

Page 13

High Level Selection Considerations

Aruba:
– Consistency with campus
– Analytics, automation, and simplicity
– Interest in CX innovations

FlexFabric:
– Traditional requirements
– Software feature depth
– Unique requirements & integrations

Page 14

FY19 DC Portfolio: FlexFabric Options

Spine:
– 12900E Series (4, 8, 16 slots) – highest density, 25/100GbE flexibility and features
– 12901E and 12902E Series – compact, cost-effective 100GbE (small core/spine)

Leaf:
– 5710 Series – 1/10GbE with 40GbE uplinks; ToR / server iLO, price/performance
– 594x – fixed/modular 10/40GbE; 1-100GbE fixed and modular ToR flexibility
– 5950 – fixed/modular 10/25/50/100GbE (including 32 x 100G)
– 5980 – advanced 10/100GbE; storage/HPC ToR

Page 15

FlexFabric Leaf Options

HPE FlexFabric 5710 series
– 1/10 GbE downlinks x 40/100G uplinks
– Low-latency, high-availability connectivity
– Perfect for out-of-band management (iLO) connectivity

HPE FlexFabric 5940 and 5945 series
– 10 or 25 GbE downlinks x 40/100G uplinks
– VXLAN support for network virtualization
– Low-latency, high-availability connectivity
– Enhanced support for telemetry

HPE FlexFabric 5980 series
– 1/10 GbE downlinks x 100G uplinks
– VXLAN support for network virtualization
– Full data path error detection
– Deep buffers to ensure network connectivity
– Flexible port configurations

Page 16

FlexFabric 12900E Family – Chassis Sizes to Match Capacity Needs
(12901E, 12902E, 12904E, 12908E, 12916E)

– Up to 120 Tbps switching capacity
– Up to 768 concurrent EVPN sessions
– Up to 3072 25 GbE ports

– Mix and match switches to meet local needs
– Ideal for east-west traffic and reducing the number of tiers
– Leaf-spine increases resiliency and reduces complexity for traffic policies
– Upgrade to 10/25GbE at the edge
– Future-proof with 100GbE in the core today and 400GbE in the future

Page 17

Aruba Core & Datacenter Switching: Powered by CX Innovations

(Diagram: Aruba 8320, 8325 and 8400 positioned as the Mobile First campus networking core and, opportunistically, as spine/leaf)

– Future-proof wired infrastructure, WLAN and IoT enabling
– Highly scalable, programmable, automated Data Center solution
– User, device, server aware – ZTP ease of deployment

Page 18

Modernizing Campus Core, Aggregation and Data Center
Aruba 8000 Series with ArubaOS-CX

Aruba 8400
• Highest reliability, flexibility, performance, port density
• 19 Tbps system, 8-slot chassis
• Redundancy everywhere: management module, fabric, power, fans
• Up to 512 x 10GbE, 128 x 40GbE, 96 x 100GbE in a 2-chassis pair

Aruba 8320
• Workhorse for mid-size core/aggregation use cases
• 2.5 Tbps system, 1RU
• N+1 redundant hot-swappable power supplies, fans
• Three models: 48 x 10GbE, 48 x 10GBASE-T, 32 x 40GbE

Aruba 8325
• Mid-size core/aggregation use cases and DC ToR or EoR
• 6.4 Tbps system, 1RU
• N+1 redundant hot-swappable power supplies, fans
• 32 ports of 40/100 GbE, or 48 ports of 10/25 GbE and 8 ports of 40/100 GbE
• Front-to-back or back-to-front airflow

Page 19

Typical Customer Deployments – Products and features

Collapsed 1-Tier (IRF/VSX/MLAG)
– #Servers: 50-100 / VM scale: 5,000+
– Customer: Small server rooms / Persona: K-12 school districts
– Features: L2, MCLAG, VSX, DCB, API integration
– Products: Aruba 8400*

Optimized L2 Fabric (IRF/VSX MLAG)
– #Servers: ~100-500 / VM scale: 25,000-50,000+
– Customer: Small to medium data centers / Persona: Education, local gov., retail
– Features: L2, VSX (MCLAG + config sync), DCB, API integration
– Products: Core: Aruba 8400 / FF 5950 / 12900E; Access: Aruba 83xx / FF 57XX / 59XX

Optimized L3 Fabric (IRF/VSX MLAG)
– #Servers: ~500-2000 / VM scale: 100,000-500,000+
– Customer: Medium to large DC / Persona: Enterprises, universities
– Features: ECMP, L3 routing, IPv6, VSX, DCB, NSX, API integration
– Products: Spine: Aruba 8400 / FF 5950 / 12900E; Leaf: Aruba 83xx / FF 57XX / 59XX

L3 Fabric – Spine & Leaf, L3 ECMP, VXLAN* & EVPN (VMs on vSwitches)
– #Servers: ~2000+ / VM scale: 750,000+
– Customer: Large and complex data centers / Persona: Financial services, large enterprises
– Features: VXLAN with MP-BGP EVPN, ECMP, L3, VSX, DCB, NSX, API integration***
– Products: Spine: Aruba 8400 / FF 5950 / 12900E; Leaf: Aruba 8325** / FF 57XX / 59XX

*Aruba 8400 will cover the majority of the use cases. **Aruba 8325 supports static VXLAN. ***BGP EVPN, NSX, DCN and DCB are Aruba roadmap items.
– Position 12901/2 for small spine/core and 12904/8/16 for high-density spine/core.
– Position 5940/5950/12900E for FC requirements.
– Position 59XX/12900E for FCoE requirements.

Page 20

Scalability, Agility, Mobility
Enterprise Data Center Aruba Network Architecture Evolution*

(Diagram: portfolio roadmap across Phase 1 (1H FY19) and Phase 2 (2H FY19), spanning Collapsed 1-Tier (IRF/VSX/MLAG), Optimized L2 Fabric (IRF/VSX MLAG), Optimized L3 Fabric (IRF/VSX MLAG) and an L3 Fabric with spine & leaf, L3 ECMP, VXLAN* & EVPN and VMs on vSwitches)

*This is an Aruba portfolio view

Page 21

Management and Orchestration

AirWave – Unified multi-vendor wired + wireless network management

Core, Aggregation and Data Center

NAE – Flexible troubleshooting and automated root-cause analytics simplify and enhance visibility and control

NetEdit – Scalable, simple, CLI-based orchestration

IMC – Advanced wired management

Page 22

Data Center Networking Technologies

Page 23

Connecting Servers and Switches

Page 24

Connecting Servers/Endpoints

Single Attached
– No HW networking redundancy
– Fewer cables/NICs
– Example: partitioned replica-set solutions like Hadoop or MongoDB
  – i.e. if a single server loses connectivity, the application and data are still accessible to clients

Dual attached
– Link redundancy
– Higher bandwidth
– Multiple NICs can provide NIC redundancy
– More cables/NICs
– LACP recommended to detect failed links
– Example: environments that require more bandwidth

Dual switch attached
– Switch/link/NIC redundancy
– Higher bandwidth/performance, with no downtime during upgrades
– SW and HW redundancy with active-active L2/L3
– Simplified management
– LACP recommended to detect failed links
– Example: VMware environments that require hosts to stay online at all times

(Diagram: a server single-attached to one switch, dual-attached to one switch, and dual-attached to a pair of switches)

Page 25

Topology Examples – Single Server Connectivity: Active/Passive vs. LACP

Active / Passive
– Servers normally don't care which link traffic exits on
– Suboptimal traffic flow
– High Inter-Switch-Link load
– Only one exit is usable (MAC flapping)

LACP
– Servers can use both links
– Optimal traffic flow
– No Inter-Switch-Link load

(Diagram: active/passive teaming with one active and one backup link vs. an LACP bond using both links)

Page 26

Switch Virtualization Comparison

Single control plane – IRF / VSS / VSF / Virtual Chassis
– Chassis 1 and Chassis 2 share a single management, control and routing plane across the virtual chassis

Dual control plane – VSX / MC-LAG / vPC / MLAG
– Chassis 1 and Chassis 2 each keep their own management, control and routing planes, synchronized over the Ethernet inter-switch link (ISL)

VSX is the ideal solution, providing a minimal scope of outage: each box operates separately, yet in concert.

Page 27

Comparison of Virtualization Solutions

Columns: Aruba 8400(1)/8320 (with VSX) | FlexFabric 129xx/59xx/57xx (with IRF) | Cisco Nexus 3/5/7/9xxx (with vPC) | Arista 7xxx (with MLAG)

– IPs for switch management: 2 | 1 | 2 | 2
– Control planes: 2 | 1 | 2 | 2
– HA during upgrades: Built in by design | ISSU within major code branches | Built in by design | No
– Active-active unicast & multicast(2): Supported | Supported | Supported | No
– Config simplicity and troubleshooting: Extensive support | Single config | Limited | No support
– MC port-channel/LAG: L2 and L3 | L2 and L3 | L2 only (except 5K) | L2
– First-hop redundancy: Eliminates need for VRRP (less configuration) | Eliminates need for VRRP (less configuration) | Needs VRRP/HSRP | Needs proprietary virtual ARP feature

(1) Requires dual supervisor in each chassis. (2) Intended for a future software release.

Page 28

Aruba CX Major Release versus Minor Release – Upgrade Scenarios

Columns: Upgrade between minor releases | Upgrade between major releases (ISLP comp.)

– Version path example: from 10.1.0008 to 10.1.0015 | from 10.1.0018 to 10.3.0009
– ISLP version compatibility: Yes | Yes
– When one switch is upgraded and reboots, it re-joins the current VSX pair: Yes | Yes
– VSX pair operates with different SW releases for a transition period: Yes | Yes
– Maximum impact during upgrade steps: ~300 ms (unicast) | ~300 ms (unicast)
– Goal cumulative impact for complete upgrade: <1 s (unicast) | <1 s (unicast)

Page 29

Aruba CX Upgrade Process Details

(Diagram: a VSX pair – primary (SW1) and secondary (SW2) – joined by an ISL, with VSX LAGs to the attached devices at each step)

Step 0 – Start with the VSX secondary node:
– copy tftp://.. primary
– boot system primary
– Routing protocols graceful shutdown
– While SW2 reboots, SW1 is forwarding

Step 1
– As soon as SW2 is back from reboot, VSX is in the In-Sync state
– SW1 is forwarding
– SW2 is learning (LACP, MAC, ARP, routing) and in linkup-delay

Step 2
– SW2 is forwarding. VSX-sync is stopped.

Step 3 – Finish with the VSX primary node:
– copy tftp://.. primary
– boot system primary
– Routing protocols graceful shutdown
– While SW1 reboots, SW2 is forwarding
– As soon as SW1 is back from reboot, VSX is in the In-Sync state
– SW2 is forwarding
– SW1 is learning (LACP, MAC, ARP, routing) and in linkup-delay

Step 4
– SW1 is forwarding. VSX-sync is running.
– Routing protocols nominal metrics are restored
– SW1 and SW2 are forwarding. VSX-sync is running.

Measured convergence during the upgrade steps: UP-to-DOWN < 667 ms / DOWN-to-UP < 1775 ms and UP-to-DOWN < 18 ms / DOWN-to-UP < 1551 ms

Page 30

FlexFabric Upgrade Scenarios

Compatible
– 4 steps:
  – issu load file … slot <slave>
  – issu run switchover
  – issu accept
  – issu commit slot <x>
– 3 reboots: original IRF master – 2 reboots; original IRF slave – 1 reboot
– The issu run switchover command reboots the current master with the old software version, causing the upgraded subordinate member to be elected as the new master.

Incompatible
– 4 steps:
  – boot-loader … slot 1
  – reboot slot 1
  – boot-loader … slot 2
  – reboot slot 2
– 2 reboots: original IRF master – 1 reboot; original IRF slave – 1 reboot
– Brute force method: IRF will not reform, so there is no switchover option.

Page 31

FlexFabric Compatible Upgrade – Process details

(Diagram: a two-member IRF fabric with LACP links to the attached devices at each step)

Step 0 – Initial state: Unit 1 is master, Unit 2 is slave. SW1 and SW2 are forwarding.
(Note: Unit 1 can also be the slave and Unit 2 the master.)

Step 1 – Start ISSU with member 2:
– issu load file … slot <slave>
– While 2 reboots, 1 is forwarding

Step 2 – Switch master/slave:
– issu run switchover
– While 1 reboots, 2 is forwarding and is master

Step 3 – Complete ISSU on member 1:
– issu commit slot 1 <new slave>
– While 1 reboots, 2 is forwarding and is master

Step 4 – Final state: version R24xx (new code). Unit 2 is master, Unit 1 is slave.

Measured convergence during the upgrade steps: UP-to-DOWN < 5 ms / DOWN-to-UP < 11 ms and UP-to-DOWN < 14 ms / DOWN-to-UP < 6 ms

Page 32

FlexFabric Incompatible Upgrade – Process details

(Diagram: a two-member IRF fabric with LACP links to the attached devices at each step)

Step 0 – Initial state: version R23xx. Unit 1 is master, Unit 2 is slave.
(Note: Unit 1 can also be the slave and Unit 2 the master.)
– Upload the new firmware to the flash of the IRF members

Step 1 – Start with IRF member 1:
– boot-loader … slot 1
– reboot slot 1
– While 1 reboots, 2 is master and forwarding

Step 2
– As soon as unit 1 is back from reboot, MAD (multi-active detection) occurs
– As unit 1 has the lower member ID, it becomes the new master and forwards traffic
– Unit 2 enters the MAD recovery state and shuts down all its ports except the IRF ones
– The control-plane switchover leads to a reset of all routing peerings

Step 3
– boot-loader … slot 2
– reboot slot 2

Step 4 – Final state: version R24xx. Unit 1 is master, Unit 2 is slave.

Measured convergence during the upgrade steps: UP-to-DOWN < 5 ms / DOWN-to-UP < 254 ms and UP-to-DOWN < 14 ms / DOWN-to-UP < 5 ms

Page 33

Connectivity Options

Page 34

Connectivity Options (form factor, speed, connectivity option, fiber/conductor type, optical connector)

BASE-T (RJ-45)
– 10/100/1000M: Cat 5 and up
– 10GbE: Cat 6a (55 m) / Cat 6a and up (100 m)

SFP (1GbE)
– SX: 2-strand MMF (multi-mode fiber), LC connector
– LX, LH: 2-strand SMF (single-mode fiber), LC connector

SFP+ (10GbE)
– SR, LRM: 2-strand MMF, LC
– LR, ER: 2-strand SMF, LC
– Copper Direct Attach Cable (DAC): 4-conductor twinax copper
– Active Optical Cable (AOC): fixed MMF cable

SFP28 (25GbE)
– SR: 2-strand MMF, LC
– Copper DAC: 4-conductor twinax copper
– AOC: fixed MMF cable

QSFP+ (40GbE)
– SR4, eSR4: 8-strand MMF, MPO
– LR4, ER4: 2-strand SMF, LC
– BiDi: 2-strand MMF, LC
– Copper DAC: 16-conductor twinax copper
– AOC: fixed MMF cable

QSFP28 (100GbE)
– SR4: 8-strand MMF, MPO
– LR4, ER4: 2-strand SMF, LC
– BiDi: 2-strand MMF, LC
– Copper DAC: 16-conductor twinax copper
– AOC: fixed MMF cable

Page 35

How do we achieve 40GbE/100GbE? Multiple lanes bundled into a single link

– Today's 4-lane 10GbE offerings efficiently scale to 40GbE (server to switch: 4 x 10 Gb lanes)
– A 10GbE-lane solution DOES NOT efficiently scale to 100GbE (it would take 10 x 10 Gb lanes)
– 25GbE provides a seamless and far more efficient migration to 100GbE (4 x 25 Gb lanes)
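A small sketch of the lane arithmetic behind this slide (generic math, not tied to any specific product):

```python
def lanes_needed(link_speed_gbps: int, lane_speed_gbps: int) -> int:
    """How many electrical/optical lanes must be bundled to reach a link speed."""
    lanes, remainder = divmod(link_speed_gbps, lane_speed_gbps)
    return lanes + (1 if remainder else 0)


print(lanes_needed(40, 10))    # 4  -> 40GbE is 4 x 10G lanes
print(lanes_needed(100, 10))   # 10 -> 100GbE over 10G lanes exceeds a 4-lane module
print(lanes_needed(100, 25))   # 4  -> 100GbE is 4 x 25G lanes
```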

Page 36

Splitting Ports

– 40GbE/100GbE ports can be split to leverage individual 10/25GbE lanes
– "Splitter cable" DACs available:
  – QSFP+ (40GbE) to 4 x SFP+ (10GbE) DAC = "using tengige"
  – QSFP28 (100GbE) to 4 x SFP28 (25GbE/10GbE) DAC = "using twenty-fivegige"
– QSFP+ (40GbE) to 4 x SFP+ (10GbE) optical splitter options use:
  – QSFP+ SR4 (MPO) > 40GbE MPO to 4 x 10GbE LC cable (K2Q46A)
  – QSFP+ 40GbE SR4 MPO optic
  – SFP+ 10GbE SR LC optic
– QSFP28 (100GbE) to 4 x SFP28 (25/10GbE) optical splitter options use:
  – QSFP28 SR4 (MPO) > MPO to 4 x LC splitter cable (K2Q46A)
  – QSFP28 100GbE SR4 MPO optic
  – SFP28 25/10GbE SR LC optic

Page 37

Which ports can split?

– The ability to split ports depends on the PHY used in the port
– Not all PHYs are equal – not all 40/100GbE ports can be split

– See FlexFabric Splitting Ports doc on Arubapedia, Iris, and Configuration Guides

Page 38

Emerging MSAs & Consortiums

– Focused on optimizations for 25G & 50G per-lane based physical layers

– 25, 50, 100, 200, and 400 GbE

– RCx MSA

– microQSFP MSA

– QSFP-DD Consortium

(Images: RCx, microQSFP and QSFP-DD connector form factors)

Page 39

Take the express lane with 25/100GbE
Solutions across HPE servers, firmware, NICs and a 25/100 GbE fabric

– HPE 25GbE network adapters: HPE ProLiant DL/ML and Blade servers
– HPE access/leaf switches: HPE FlexFabric 5950/5945, Aruba 8325
– HPE core/spine switches: HPE FlexFabric 129xx, Aruba 8400

Notes:

– 25-GbE interfaces can work at 25, 10, or 1 Gbps.
– 25-GbE interfaces do not support speed or duplex mode autonegotiation.
– You must manually configure speed/duplex to ensure that the interfaces at both ends have the same speed and duplex mode settings.

Page 40

Media (Channels) for IEEE 25GbE Spec

– Backplane: 25GBase-KR (<30”)

– Autoneg between 10Gb & 40Gb

– Next Gen Blade Servers will be plumbed for 25Gb-KR

– Passive DAC (Twinax) Cable: 25GBase-CR (<= 3m & 5m)

– <2m may not need any FEC. Lowest Latency

– 3m requires Base-R FEC (clause 74)

– >3m requires RS-FEC (clause 108)

– RCx Copper Cables (consortium MSA)

– Low-cost high-density 25G electrical cable & connector set

– Optic MMF (OM4): 25GBase-SR (<=100m)

– Optic SMF: 25GBase-LR (<=10km) – draft

– Optic SMF: 25GBase-ER (<=40km) – draft

(Image: QSFP28 to 4 x SFP28 breakout cable)
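A minimal lookup sketch of the FEC guidance above for 25GBASE-CR passive DACs (the thresholds come from this slide; the helper name and the example lengths are illustrative):

```python
def fec_for_25g_dac(length_m: float) -> str:
    """FEC requirement for a 25GBASE-CR passive DAC, per the guidance above."""
    if length_m < 2:
        return "FEC may not be needed (lowest latency)"
    if length_m <= 3:
        return "BASE-R FEC (IEEE 802.3 clause 74)"
    return "RS-FEC (IEEE 802.3 clause 108)"


for length in (1, 2.5, 3, 5):
    print(f"{length} m DAC: {fec_for_25g_dac(length)}")
```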

Page 41

Switch Upgrade (Incremental)
Consistent transceiver form factors (10G/40G > 25G/100G) allow a seamless transition

(Diagrams: four stages of a ToR refresh, each showing 96 server-facing ports and uplinks to the spine)

Day 1 (starting point) – 40G ToR switch (10G I/O)
– 8 x 40GE up
– 96 x 10GE down

Install new ToR – 100G ToR switch (10/25G I/O)
– 8 x 40GE up
– 96 x 10GE down
– No network capacity loss!

Incrementally upgrade uplinks to 100GE and downlinks to 25GE
– Hybrid configuration (mix of 10GE/25GE down and 40GE/100GE up)

Upgrade complete!
– 8 x 100GE up
– 96 x 25GE down

Page 42

Buffers?

Page 43

Buffers – Do they matter? And where?

(Diagrams: a leaf with 480G of server ports (48 x 10G) and 200G of uplinks (2 x 100G) = 2.4:1 oversubscription, vs. a leaf with 200G of server ports (20 x 10G) and 200G of uplinks (2 x 100G) = 1:1 oversubscription; 32 x 100G spines and Leaf 1 through Leaf 32 in both cases)

– Congestion will exist in TCP/Ethernet networks; buffers are used to help absorb it
– Congestion collapse prevents or limits useful communication / bufferbloat can cause excess packets to queue in switch buffers
– The spine experiences the same traffic load in both scenarios – uneven flows can cause congestion at the spine
– Spine switches experience more consistent traffic flows, with congestion still possible
– Use techniques like WRED and RED end-to-end to address congestion situations (see the sketch below)
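For reference, a minimal sketch of the classic RED drop-probability curve that WRED builds on (the generic algorithm, not any specific switch implementation; the thresholds are illustrative):

```python
def red_drop_probability(avg_queue: float, min_th: float, max_th: float,
                         max_p: float = 0.1) -> float:
    """Classic RED: no drops below min_th, probabilistic drops that ramp up to
    max_p between min_th and max_th, and tail-drop (probability 1.0) above max_th.
    WRED applies the same curve per traffic class / drop precedence."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)


# Illustrative thresholds: start dropping at 20 KB average depth, tail-drop at 80 KB.
for depth_kb in (10, 30, 60, 90):
    print(depth_kb, "KB ->", round(red_drop_probability(depth_kb, 20, 80), 3))
```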

Page 44

Buffers – Deep Buffer vs Standard Buffer Switches

To profile an application, answer these 3 questions:
– Does the application utilize close to 100% of line rate for sustained periods?

Consider deep buffer where:
– Loss-sensitive applications are mixed with bursty applications
– There is a large number of elephant flows, which can fill up buffers and starve mice flows
– Usually greater than 1 GB of packet buffer

Use standard buffer for:
– Low latency
– A network with lower oversubscription rates
– Streaming applications where bandwidth is relatively constant
– Usually smaller than 30 MB of packet buffer

(A decision sketch follows below.)
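A minimal decision-helper sketch that encodes the guidance above (the criteria and buffer sizes come from this slide; the function and its inputs are hypothetical, not a formal sizing rule):

```python
def recommend_buffer_profile(sustained_line_rate: bool,
                             mixes_loss_sensitive_and_bursty: bool,
                             many_elephant_flows: bool,
                             low_latency_priority: bool,
                             low_oversubscription: bool) -> str:
    """Rough deep-buffer vs standard-buffer recommendation per the slide."""
    if sustained_line_rate or mixes_loss_sensitive_and_bursty or many_elephant_flows:
        return "Deep buffer (typically > 1 GB of packet buffer)"
    if low_latency_priority or low_oversubscription:
        return "Standard buffer (typically < 30 MB of packet buffer)"
    return "Standard buffer (suits constant-bandwidth streaming workloads)"


print(recommend_buffer_profile(True, False, True, False, False))   # deep buffer
print(recommend_buffer_profile(False, False, False, True, True))   # standard buffer
```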

Page 45

ArubaOS-CX & HPE FlexFabric Interop

Can customers integrate ArubaOS-CX switches with HPE FlexFabric?

Refer to ArubaOS-CX & HPE FlexFabric Interop guide on Arubapedia for tested features

Page 46

Interop Test Cases

The interop test cases listed are focused on features used in Data Center Networks.

LLDP

• LLDP Test #1 – Neighbor Detection

STP

• STP Test #1 – Loop Prevention

LACP

• LACP Test #1 – L2 Dynamic link aggregation

• LACP Test #2 – L3 Dynamic link aggregation

• LACP Test #3 – L2 Dynamic link aggregation (VSX & IRF)

OSPF

• OSPF Test #1 – L3 Network advertisement and reachability

• OSPF Test #2 – BFD Interop

BGP

• BGP Test #1 – IBGP network advertisement and reachability

• BGP Test #2 – EBGP network advertisement and reachability

• BGP Test #3 – BFD Interop

ArubaOS-CX Leafs & HPE FlexFabric Spines in an L3 fabric

• Test #1 – ArubaOS-CX Leafs with VSX

Additional test cases will be added in future as required

Page 47

ArubaOS-CX VSX Leafs with 12900E Spines

• Expected test result: EBGP neighbors should form, and networks should be advertised and received. L3 network connectivity between servers in different racks should work.
• Final test result: Works as expected.
• Note: In a production deployment, it is recommended that each physical leaf switch utilize multiple uplinks to different spines.

(Topology: spine switches 12904-1 and 12904-2; leaf switches 8320-1 and 8320-2 as a VSX pair in Rack 12 and an IRF pair of 5940-3 switches in Rack 14; eBGP AS numbers 65001-65005 across the spines and leafs; /31 links between leafs and spines. Rack 12: Server 2 at 192.168.12.10/24 behind VSX active gateways 192.168.12.1/24. Rack 14: Server 4 at 192.168.14.10/24 behind default gateway 192.168.14.1/24.)
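To illustrate the "/31 links between leafs and spines" detail, a minimal addressing sketch using Python's standard ipaddress module (the device names come from the tested topology above, but the 10.255.0.0/24 parent block is illustrative):

```python
import ipaddress

spines = ["12904-1", "12904-2"]
leafs = ["8320-1", "8320-2", "5940-3"]

# Carve /31 point-to-point subnets for every leaf-spine link out of an example block.
p2p_blocks = ipaddress.ip_network("10.255.0.0/24").subnets(new_prefix=31)

for spine in spines:
    for leaf in leafs:
        link = next(p2p_blocks)
        spine_ip, leaf_ip = link[0], link[1]   # a /31 has exactly two usable addresses
        print(f"{spine} {spine_ip}/31 <--> {leaf} {leaf_ip}/31")
```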

Page 48

Questions?

Page 49

Thank You!