Evolution of Hierarchical Network Design for the Data Center



Evolution of Hierarchical Network Design for the Data Center

    BRKDCT-2961


Impact of new features and products on hierarchical design for data center networks

    Data center services insertion

Layer 3 features and best practices

Layer 2 features, enhancements and best practices

    Virtualized Access Layer Design

    What We Will Cover In This Session:


[Diagram: hierarchical topology with the Enterprise Network, a Data Center Core, Aggregation, and Access layers; Layer 3 links above the aggregation, Layer 2 trunks below]

    Hierarchical Design Network Layers:

Data Center Core

Routed layer which is distinct from the enterprise network core

Provides scalability to build multiple aggregation blocks

Aggregation Layer

Provides the boundary between layer-3 routing and layer-2 switching

Point of connectivity for service devices (firewall, SLB, etc.)

Access Layer

Provides the point of connectivity for servers and shared resources

Typically layer-2 switching

    Defining the Terms


[Diagram: Enterprise Network connecting to a dedicated Data Center Core with multiple aggregation blocks/pods below it]

Scaling the Topology with a Dedicated Data Center Core

A dedicated Data Center Core provides layer-3 insulation from the rest of the network

Switch port density in the DC Core is reserved for scaling additional DC Aggregation blocks or pods

    Provides single point of DC route summarization


What Makes Designing Networks for the Data Center Different?

Extremely high density of end nodes and switching

Power, cooling, and space management constraints

Mobility of servers is a requirement, without DHCP

The most critical shared end-nodes in the network: high availability required with very small service windows

Multiple logical multi-tier application architectures built on top of a common physical topology

Server load balancing, firewall, and other services required


[Diagram: pod layout mapping the Data Center Core, Aggregation, and Access layers onto network racks and server racks within a server pod]

Mapping Network Topology to the Physical Design

Design the Data Center topology in a consistent, modular fashion for ease of scalability, support, and troubleshooting

Use a pod definition to map an aggregation block or other bounded unit of the network topology to a single pod

The server access connectivity model can dictate port count requirements in the aggregation and affect the entire design


Traditional Data Center Server Access Models

End-of-Row (EoR)

High-density chassis switch at the end or middle of a row of racks; fewer overall switches

Provides port scalability and local switching; may create cable management challenges

Top-of-Rack (ToR)

Small fixed or modular switch at the top of each rack; more devices to manage

Significantly reduces bulk of cable by keeping connections local to the rack or an adjacent rack

Integrated Switching

Switches integrated directly into the blade server chassis enclosure

Maintaining feature consistency is critical to network management; sometimes pass-through modules are used


[Diagram: a client-server flow from the Enterprise Network through the Data Center Core, Aggregation, and Access layers, and a multi-tier server-to-server flow within the Aggregation/Access layers]

    Planning for Network Oversubscription

Networks may be designed with non-blocking switches and modules, but oversubscription provides cost savings

Oversubscription considerations include:

Switch level, across layers of the network

I/O module level; consider dedicated-mode ports on the Nexus 7000 for the aggregation port channel

Multi-tier application flows may be a good percentage of traffic and may take a different path

The key metric is a server's path to the Aggregation layer, which accounts for both client-server and server-to-server paths

Service modules and appliances may be the actual limiting factor of raw throughput


Impact of New Features and Products on Hierarchical Design for Data Center Networks


[Diagram: Data Center Core, Aggregation, and Access layers (Layer 3 links, Layer 2 trunks) with a virtual access layer below, built from the Nexus 2000, Nexus 4000, and Nexus 1000v down to the VMs]

Building the Access Layer Using Virtualized Switching

Virtual Access Layer

Still a single logical tier of layer-2 switching

Common control plane with virtual hardware- and software-based I/O modules

Nexus 4000

Blade switch, multi-hop capable with FIP-snooping mode

Nexus 2000

Switching fabric extender module

Acts as a virtual I/O module supervised by the Nexus 5000

Nexus 1000v

Software-based virtual distributed switch for server virtualization environments


[Diagram: server access carrying IP data and storage; Ethernet uplinks to LAN aggregation, Fibre Channel to the SAN, and Ethernet plus FCoE down to the servers]

Migration to a Unified Fabric at the Access Supporting Data and Storage

Nexus 5000 Series switches support integration of both IP data and Fibre Channel over Ethernet at the network edge

FCoE traffic may be broken out on native Fibre Channel interfaces from the Nexus 5000 to connect to the Storage Area Network (SAN)

Servers require Converged Network Adapters (CNAs) to consolidate this communication over one interface, saving on cabling and power


A cohesive system including a virtualized layer-2 access layer supporting unified fabric with central management and provisioning

Optimized for greater flexibility and ease of rapid server deployment in a server virtualization environment

From a topology perspective, similar to the Nexus 5000 and 2000 series

    Cisco Unified Computing System (UCS)

[Diagram: UCS 6100 Series Fabric Interconnects with Ethernet uplinks to LAN aggregation and Fibre Channel uplinks to dual SAN fabrics; UCS FEX uplinks to UCS 5100 enclosures containing UCS 2100 Fabric Extenders, UCS B-Series servers, and UCS I/O adapters]


Nexus 7000 Series Virtual Device Contexts (VDCs)

Virtualization of the Nexus 7000 Series chassis

Up to 4 separate virtual switches from a single physical chassis with common supervisor module(s)

Separate control plane instances and management/CLI for each virtual switch

Interfaces belong to only one of the active VDCs in the chassis; external connectivity is required to pass traffic between VDCs of the same switch

Designing with VDCs

VDCs serve a role in the topology similar to a physical switch: core, aggregation, or access

Multiple VDC example topologies have been validated within Cisco by ESE and other teams

Two VDCs from the same physical switch should not be used to build a redundant network layer; physical redundancy is more robust


[Diagram: Aggregation VDC and Sub-Aggregation VDC sandwiching a Catalyst 6500 services chassis, between the Core/Enterprise Network and the Access layer]

    Virtual Device Context Example:

Multiple VDCs used to sandwich services between switching layers

Allows services to remain transparent (layer-2) with routing provided by the VDCs

May be leveraged to support both services chassis and appliances

Design considerations:

Access switches requiring services are connected to the sub-aggregation VDC

Access switches not requiring services may be connected to the aggregation VDC

Allows firewall implementations not to share interfaces for ingress and egress

Facilitates virtualized services by using multiple VRF instances in the sub-aggregation VDC

    Services VDC Sandwich


    Data Center Service Insertion


[Diagram: service appliances attached directly to the aggregation switches, between the Data Center Core and the Access layer]

    Data Center Service Insertion:

Appliances directly connected to the aggregation switches

Service device type and routed or transparent mode can affect physical cabling and traffic flows

Transparent-mode ASA example:

Each ASA is dependent on one aggregation switch

Separate links for fault-tolerance and state traffic either run through the aggregation or directly

Dual-homing with the interface redundancy feature is an option

Currently no EtherChannel is supported on the ASA

    Direct Services Appliances


[Diagram: dual-homed services chassis attached to both aggregation switches, between the Data Center Core and the Access layer]

    Data Center Service Insertion:

Dual-homed Catalyst 6500

Services do not depend on a single aggregation switch

Direct link between chassis for fault-tolerance traffic; may alternatively trunk these VLANs through Aggregation

Dedicated integration point for multiple data center service devices

Provides slot real estate for 6500 services modules:

Firewall Services Module (FWSM)

Application Control Engine (ACE) Module

Other services modules; also beneficial for appliances

    External Services Chassis


[Diagram: client-server flow from the Enterprise Network through the Data Center Core, the Aggregation VDC (VLAN 161), a transparent FWSM context (VLAN 162), a transparent ACE context (VLAN 163), and the Sub-Aggregation VDC (VLANs 170-172) to the web server farm (VLAN 180)]

Using Virtualization and Service Insertion to Build Logical Topologies

Logical topology example using the services VDC sandwich physical model

Layer-2-only services chassis with transparent service contexts

VLANs above, below, and between the service modules are a single IP subnet

The Sub-Aggregation VDC is a layer-3 hop running HSRP, providing the default gateway to the server farm subnets

Multiple server farm VLANs can be served by a single set of VLANs through the services modules

Traffic between server VLANs does not need to transit the services device, but may be directed through the services using virtualization


[Diagram: the same services sandwich with multiple transparent FWSM and ACE contexts, FT VLANs, and per-tier VRF instances in the Sub-Aggregation VDC; a client-server flow (VLANs 161-163, 180) to the Web/App server farm and a server-to-server flow (VLANs 151-153, 181) to the DB server cluster]

Logical topology to support multi-tier application traffic flow

Same physical VDC services chassis sandwich model

Addition of multiple virtual contexts to the transparent services modules

Addition of VRF routing instances within the Sub-Aggregation VDC

Service module contexts and VRFs are linked together by VLANs to form logical traffic paths

Example: Web/App server farm and Database server cluster homed to separate VRFs to direct traffic through the services

Using Virtualization and Service Insertion to Build Logical Topologies


    Layer 3 Features and Best Practices


[Diagram: Layer 3 from the Enterprise Network through the Data Center Core to the Aggregation layer; Layer 2 from Aggregation down to Access]

Layer-3 Feature Configuration in the Data Center

Summarize IP routes at the DC Aggregation or Core to advertise fewer destinations to the enterprise core

Avoid IGP peering of aggregation switches through the access layer by setting VLAN interfaces as passive

Use routing protocol authentication to help prevent unintended peering

If using OSPF, set a consistent reference bandwidth at 10,000 or higher to support 10 Gigabit Ethernet

HSRP timers at hello-1 dead-3 provide a balance of fast failover without too much sensitivity to control plane load
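
A minimal NX-OS sketch of these practices (hypothetical VLAN, area, prefix, and key values; feature ospf, hsrp, and interface-vlan assumed enabled):

! OSPF with a 10GE-aware reference bandwidth and DC route summarization
router ospf 1
  auto-cost reference-bandwidth 10000
  area 0.0.0.10 range 10.10.0.0/16
! Server-facing SVI: passive for OSPF, authenticated, with tuned HSRP timers
interface Vlan161
  ip ospf passive-interface
  ip ospf authentication message-digest
  ip ospf message-digest-key 1 md5 Example123
  hsrp 1
    ip 10.10.161.1
    timers 1 3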


[Diagram: Nodes 1-4 peering over a common subnet/VLAN between Network A and Network B; when Node 3 fails, a neighbor asks "What happened to hellos from Node 3? I will wait for the hold timer to expire"]

IGP Hello and Dead/Hold Timers: Behavior Over a Shared Layer 2 Domain

Routing protocols insert destinations into the routing table and maintain peer state based on receipt of continuous Hello packets

Upon device or link failure, the routing protocol removes the failed peer's routes only after the Dead/Hold timer expires

Tuning Dead/Hold timers lower provides faster convergence over this type of topology

A firewall module running an IGP is an example of a data center device peering over an L2 domain, or any VLAN interface (SVI)


[Diagram: Nodes 1-4 joined by point-to-point routed links between Network A and Network B; when the link to Node 3 fails, its neighbor reasons "My one direct link to Node 3 has gone down. I better remove Node 3's routes from the table immediately"]

IGP Hello and Dead/Hold Timers: Behavior Over Layer-3 Links

Upon device or link failure, the routing protocol immediately removes routes from the failed peer based on the interface down state

Tuning the IGP Hello and Dead/Hold timers lower is not required for convergence due to link or device failure

Transparent-mode services or using static routing with HSRP can help ensure all failover cases are based on point-to-point links

Note that static routing with HSRP is not a supported approach for IP multicast traffic
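
A minimal sketch of the static-routing-with-HSRP approach mentioned above (hypothetical addresses): the service device points a static route at the HSRP virtual IP, so failover rides on HSRP and point-to-point link state rather than on IGP Dead/Hold timers:

! Default route toward the HSRP virtual IP shared by the aggregation pair
ip route 0.0.0.0/0 10.10.161.1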


Nexus 7000 Dual-Supervisor IGP Hello and Dead/Hold Timer Settings

Redundant supervisor modules are designed to provide high availability first within the chassis, then through the design of the topology

Since the data plane is still forwarding, we don't want to re-converge traffic during a supervisor failover

When using dual supervisors, default Hello and Dead/Hold timers permit continuous packet forwarding during supervisor failover without need for re-convergence

If using default timers, ensure that all links are point-to-point Layer 3 links, and service devices are in transparent mode or using static routes

If using single supervisors, tuned Hello and Dead/Hold timers of 1 and 3 seconds may be used
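
A minimal sketch of the tuned timers for single-supervisor designs (hypothetical interface; OSPF assumed as the IGP):

interface Ethernet1/1
  ip ospf hello-interval 1
  ip ospf dead-interval 3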


Layer 2 Features, Enhancements and Best Practices


[Diagram: looped-triangle access topology across Aggregation and Access; legend: N network port, R Root Guard, plus designated, root, and alternate port roles]

Classic Spanning Tree Topology: Looped Triangle Access

Layer-2 protocols are designed to be plug-and-play and forward traffic without configuration

Stability is enhanced by controlling the location of the STP root switch and using consistent topologies

Looped topologies are required to provide link redundancy and server mobility across access switches

Using STP to break the network loop reduces available bandwidth in a VLAN due to blocked links

Most STP issues result from undesired flooding due to link issues or software problems causing loss of BPDUs


These features allow STP to behave with more intelligence, but require manual configuration:

Rootguard prevents a port from accepting a better path to root where this information should not be received

Loopguard restricts the transition of a port to a designated forwarding role without receiving a BPDU with an inferior path to root

Portfast (edge port) allows STP to skip the listening and learning stages on ports connected to end hosts

BPDUguard shuts down a port that receives a BPDU where none should be found; typically also used on ports facing end hosts

Spanning Tree Configuration Features: Rootguard, Loopguard, Portfast, BPDUguard
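
A minimal NX-OS sketch applying these features (hypothetical interfaces):

! Global defaults: loopguard on non-edge ports, BPDUguard on edge ports
spanning-tree loopguard default
spanning-tree port type edge bpduguard default
! Server-facing port: edge (portfast) behavior
interface Ethernet1/10
  spanning-tree port type edge
! Downlink that should never accept a better path to root
interface Ethernet1/1
  spanning-tree guard root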


[Diagram: Rapid-PVST topology; legend: N network port, R Root Guard, E edge port, plus designated, root, and alternate port roles]

BPDUs in Rapid-PVST carry information about STP port roles

Only one designated port can exist per segment without creating a loop

The Dispute Mechanism performs a consistency check, validating the path to root against the port role

Requires no configuration; enabled automatically

In all versions of NX-OS; available in IOS on the Catalyst 6500 beginning with 12.2(33)SXI

Updated STP Features: Dispute Mechanism


[Diagram: aggregation/access topology with edge ports (E) at the access sending no BPDUs and network ports (N) on inter-switch links all sending BPDUs; legend: N network port, R Root Guard, E edge port, plus designated, root, and alternate port roles]

Updated STP Features: Bridge Assurance

Specifies transmission of BPDUs on all ports of type network

Requires configuration; best practice is to set the global default to type network (the default is normal)

NX-OS examples:

spanning-tree port type {edge [trunk], network, normal}

spanning-tree port type network default

IOS examples:

spanning-tree portfast {edge [trunk], network, normal}

spanning-tree portfast network default

Protects against unidirectional links and peer switch software issues


    Virtualized Access Layer Design


    Virtualized Access Layer Design

    vPC Design Principles

    Extending vPC from access to aggregation

Nexus 2000 Fabric Extender Physical Virtualization

Nexus 1000v Embedded Virtual Switching

    Nexus 4000 Unified I/O

    UCS 6100 Integrating Unified Compute


[Diagram: left, redundancy handled by STP with a blocked port; right, Multi-Chassis EtherChannel, which the STP logical view sees as a single link with no blocked port]

Merges two bridges into one, allowing Multi-Chassis EtherChannels

Also merges Layer-3 and overall switch management

Does not rely on STP for redundancy

Limited to a pair of switches

Designs Not Relying on STP: Virtual Switching System (VSS)


[Diagram: VSS design; Layer 3 above the aggregation, Layer 2 with STP and Bridge Assurance on inter-switch links, Rootguard added toward the access, and BPDUguard on edge ports; legend: B BPDUguard, L Loopguard, R Rootguard, N network port, E edge port, - normal port type; the VSS pair is the Root]

Designs Not Relying on STP: VSS Design


[Diagram: vPC domain at the aggregation; redundancy handled by vPC rather than STP, and the STP view of the vPC shows no blocked port]

    Appears as a single EtherChannel to the access layer

    Two independent control planes

    Active/active HSRP, separate Layer-3 and management

    Still no STP blocked ports

Designs Not Relying on STP: Virtual Port Channel (vPC)


[Diagram: vPC design; vPC domain at the aggregation with the STP Root and HSRP ACTIVE on one peer and the Backup Root and HSRP STANDBY on the other; Layer 3 above, Layer 2 with STP and Bridge Assurance on inter-switch links, Rootguard toward the access, BPDUguard on edge ports; legend: B BPDUguard, L Loopguard, R Rootguard, N network port, E edge port, - normal port type]

Designs Not Relying on STP: vPC Design


vPC (Virtual Port Channels) vs. VSS (Virtual Switching System):

Multi-Chassis Port Channel: Yes for both

Loop-free topology (no blocking ports): Yes for both

STP as a fail-safe protocol only: Yes for both

Control plane: vPC, two independent nodes, both active; VSS, single logical node

Switch redundancy (sup failover): vPC, intra-chassis on a Nexus 7000 with dual sups; VSS, inter-chassis

Control plane protocols: vPC, instances per node; VSS, single instance

Switch configuration: vPC, independent configs (with consistency checker); VSS, single configuration

Maximum physical nodes: 2 for both

Inter-switch link hardware: vPC, 10GE interfaces; VSS, current hardware Sup720-10G, 6708, 6716 (PFC3C mode)

vPC Design Principles: vPC and VSS Comparison


vPC peer: a vPC switch, one of a pair

vPC member port: one of a set of ports (port channels) that form a vPC

vPC: the combined port channel between the vPC peers and the downstream device

vPC peer-link: link used to synchronize state between vPC peer devices; must be 10GbE

vPC peer-keepalive link: the keepalive link between vPC peer devices, i.e., a backup to the vPC peer-link

vPC VLAN: one of the VLANs carried over the peer-link and used to communicate via vPC with a peer device

non-vPC VLAN: one of the STP VLANs not carried over the peer-link

CFS: Cisco Fabric Services protocol, used for state synchronization and configuration validation between vPC peer devices

[Diagram: two vPC peers in a vPC domain, joined by the vPC peer-link and a vPC peer-keepalive link and running the CFS protocol; vPC member ports form the vPC down to the downstream device, with a separate non-vPC device attached]

vPC Design Principles: vPC Terminology (common to Nexus 5000/7000)*

*vPC operates within the context of a VDC (applies only to the N7k); the peer pair shares a vPC domain ID
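
A minimal NX-OS sketch tying the terms together (hypothetical IDs, interfaces, and addresses), applied on each vPC peer:

feature vpc
vpc domain 10
  peer-keepalive destination 192.168.10.2 source 192.168.10.1 vrf management
! vPC peer-link: 10GbE port channel carrying the vPC VLANs
interface port-channel 1
  switchport mode trunk
  vpc peer-link
! vPC member ports: port channel to the downstream device
interface port-channel 51
  switchport mode trunk
  vpc 51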


vPC Design Principles: vPC Roles

Two Nexus running vPC appear as a single STP entity

The vPC role defines which of the two vPC peers processes BPDUs

The role matters for the behavior with peer-link failures!

The role is defined under the domain configuration

Lower priority wins; if not, the lower system MAC wins

The role is non-preemptive, so the operational role is what matters

The operational role may differ from the priorities configured under the domain

[Diagram: 5k01/7k01 configured primary (but may be operational secondary); 5k02/7k02 configured secondary (but may be operational primary)]


Multi-layer vPC can join 8-active-port port-channels into a unique 16-way port-channel*

vPC peer-side load-balancing is LOCAL to the peer

Each vPC peer has only 8 active links, but the pair has 16 active load-balanced links

[Diagram: Nexus 7000 pair with a 16-port port-channel down to a Nexus 5000]

* Possible with any device supporting vPC/MCEC and 8-way active port-channels

vPC Design Principles: Attaching to a vPC Domain - 16-way Port-Channel (1 of 2)


16 active ports between 8-active-port port-channel devices and 16-active-port port-channel devices?

vPC peer-side load-balancing is LOCAL to the peer

Each vPC peer has only 8 active links, but the pair has 16 active load-balanced links to a downstream device supporting 16 active ports

D-series N7000 line cards will also support 16-way active port-channel load balancing, providing for a potential 32-way vPC port channel!

Nexus 5000 16-port port-channel support was introduced in the 4.1(3)N1(1a) release

[Diagram: Nexus 7000 pair with a 16-port port-channel down to a Nexus 5000]

vPC Design Principles: Attaching to a vPC Domain - 16-way Port-Channel (2 of 2)


    Virtualized Access Layer Design

    vPC Design Principles

    Extending vPC from access to aggregation

Nexus 2000 Fabric Extender Physical Virtualization

Nexus 1000v Embedded Virtual Switching

    Nexus 4000 Unified I/O

    UCS 6100 Integrating Unified Compute


[Diagram, Design 1: vPC on the N7k pair (N7k01/N7k02) joined to vPC on the N5k pair (N5k01/N5k02) by a double-sided vPC of max 16 ports]

Extending vPC from Access to Aggregation: Double-Sided vPC Between N7K and N5K

Make sure to leverage LACP

The domain-id needs to differ between the N7k vPC and the N5k vPC

The Spanning-Tree root is defined on one of the two N7ks

N5k priorities are unmodified
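
A minimal sketch of the domain-id rule above (hypothetical IDs): distinct domain IDs keep the auto-derived vPC system MACs distinct, so LACP on the double-sided vPC sees two different partners:

! N7k aggregation pair
vpc domain 10
! N5k access pair
vpc domain 20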


Extending vPC from Access to Aggregation: Double-Sided vPC, Nexus 7000/5000/2000 A/A

[Diagram, Designs 2 and 3: the double-sided vPC between the N7k and N5k pairs extended to Nexus 2000 fabric extenders (N2k01/N2k02) at the access]


[Diagram: vPC on the N7k pair (root; primary/secondary) with peer-link Po10, and vPC Po51 down to the N5k pair (regular STP priority); the logical equivalent is a single root switch with one port channel to the access]

Extending vPC from Access to Aggregation: Multi-Layer vPC Logical Equivalent


Extending vPC from Access to Aggregation: Incorrect Configuration

vPC at the aggregation layer and vPC at the access layer built as two separate vPCs: one of the vPCs is blocking

[Diagram: the N7k pair (root; primary/secondary, peer-link Po10) and the N5k pair (regular STP priority) joined by two separate vPCs (Po51 on each side); the logical equivalent shows one of the two port channels blocked by STP]


[Diagram: double-sided vPC spanning tree; the aggregation pair holds the primary root (bridge ID ac32, vPC operational primary) and the secondary root (bridge ID 80c2, vPC operational secondary), with up to 8 links per vPC; root ports (RP) and designated ports (DP) resolve so that the root bridge is ac32]

Extending vPC from Access to Aggregation: Spanning Tree


Choose between MST and Rapid PVST+

With MST, be aware of the NX-OS VLAN range and of global Type-1 inconsistencies; configure VLAN-to-region mappings from day 1

Connect the N7ks with redundant peer-links across linecards

Connect the N5ks with redundant peer-links

Make sure that peer-keepalive connectivity is routed and out-of-band

Assign roots/secondary roots as usual (regardless of primary/secondary roles)

Leverage 4.2(1) code on the N7k in order to exclude non-vPC SVIs from autostate

[Diagram: N7k pair over N5k pair over dual-homed N2k fabric extenders]

Extending vPC from Access to Aggregation: Summary Checklist for vPC Design (1)


Create a single port-channel leveraging LACP between Aggregation and Access

Ensure the domain-id or system-mac differs between the Agg pair and the Access pair

On the Nexus 5000 layer, calculate and check VLAN utilization, taking FCoE and VSANs into account

Trim VLANs that are used for VSANs from the uplinks going to a Nexus 7000

When available, leverage LACP for teaming between servers and FEX/5k

Do not forget that putting a VLAN on a vPC requires that VLAN to be on the peer-link too

Make sure the configuration is not causing Type-1 inconsistencies

[Diagram: N7k pair over N5k pair over dual-homed N2k fabric extenders]

Extending vPC from Access to Aggregation: Summary Checklist for vPC Design (2)


Extending vPC from Access to Aggregation: Nexus 5000/7000 Scalability Numbers

Nexus 7000, release 4.2(1):

System-wide 16,384 VLANs (4,096 per VDC)

Fabric Extender support to come in Q2CY10

256 vPCs (4-port) with the following:

260 VLANs*

200 SVIs/HSRP groups

40K MACs and 40K ARPs

10K (S,G) with 66 OIFs (L3 sources)

3K (S,G) with 64 OIFs (L2 sources)

Nexus 5000, release 4.1(3)N1(1):

System-wide 507 VLANs (512 minus the number of VSANs)

12 Fabric Extenders

16 hardware Ethernet port channels

Up to 480 host vPCs (via FEX)

* NOTE: Supported numbers of VLANs on vPCs are NOT related to a hardware or software limit but reflect what has currently been validated by QA. The BUs plan to continuously increase these numbers as new data points become available.


Design benefits:

Large VLAN scale: 4K VLANs (per VDC)

L2/L3 boundary at the access layer

Active-active host connectivity with vPC

VDC or VRF-lite for customer segmentation

Flexibility of connectivity options: 1/10GE, STP network port

High-density access with the Nexus 7010 (even higher with the 7018)

Carrier-class availability

Hardware high availability: dual supervisors, internal redundancy

Software high availability: stateful switchovers, non-disruptive ISSU, and process restart

Positioning:

Environments with existing structured cabling (EoR/MoR)

Blade servers

[Diagram: end-of-row Nexus 7000 pair with 1/10GE connections to rack-mount and virtualized (VM) servers]

Extending vPC from Access to Aggregation: Nexus 7000 End-of-Row Access-Layer Design


    Virtualized Access Layer Design

    vPC Design Principles

    Extending vPC from access to aggregation

Nexus 2000 Fabric Extender Physical Virtualization

Nexus 1000v Embedded Virtual Switching

    Nexus 4000 Unified I/O

    UCS 6100 Integrating Unified Compute


Fabric Extender & vPC: Terminology and Components

[Diagram: two topologies. Left: Nexus 2000 single-homed with host vPC, where a 2-GigE host port channel spans FEX100 and FEX120 (vPC 1 and vPC 2 member ports). Right: Nexus 2000 active/active (or dual-homed), with fabric links from each FEX to both 5k01 and 5k02. Both show the peer-link (MCT), a peer-keepalive/FT link (can be routed), mgmt0 connections to the management network, primary/secondary roles, and host interfaces (HIF). Server 802.3ad is not supported in the active/active topology; LACP is supported with host vPC.]


[Diagram: 5k01 (primary) and 5k02 (secondary) with the peer-link (MCT), a peer-keepalive/FT link (can be routed), mgmt0 to the management network, and single-homed FEX100/FEX101 host interfaces (HIF); a BPDU received on a filtered port results in errdisable]

With BPDU filtering, the FEX still sends out 10 BPDUs after link-up, which prevents the introduction of an unwanted loop

BPDU filtering is used in conjunction with BPDU guard

Fabric Extender: BPDU Filtering + Guard


[Diagram: 5k01 (primary) and 5k02 (secondary) with the peer-link (MCT), a peer-keepalive/FT link (can be routed), mgmt0 to the management network, and single-homed FEX100/FEX101 host interfaces (HIF)]

Host-facing FEX ports are configured as:

spanning-tree port type edge

or

spanning-tree port type edge trunk

Fabric Extender: Port Type Edge (Portfast or Trunkfast)


Fabric Extender Mixed Topology: vPC Is a Per-Line-Card (FEX) Behaviour

[Diagram: mixed topology under the 5k01 (primary) and 5k02 (secondary) pair; FEX100 and FEX120 serve a 2-GigE host port channel, while FEX101 and FEX121 serve single-attached and/or active/standby servers; mgmt0 to the management network]


Fabric Extender Scaling: Nexus 2000 Single-Homed (aka Straight-Through)

Nexus 2000 straight-through deployment: max 12 FEXes per n5k01 (FEX100, FEX101, FEX102, ...) = 576 ports, with max 4 fabric links per FEX

Typical redundant deployment as of 4.0(1a): active/standby across n5k01 and n5k02 (FEX100-102 and FEX120-122), max 12 x 2 FEXes = 576 ports x 2

http://www.cisco.com/en/US/partner/products/ps9670/products_installation_and_configuration_guides_list.html
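
A minimal NX-OS sketch of associating a straight-through FEX with its parent Nexus 5000 (hypothetical IDs and interfaces):

feature fex
fex 100
  pinning max-links 1
! Fabric link from the 5k down to the FEX
interface Ethernet1/17
  switchport mode fex-fabric
  fex associate 100
! Host interfaces on the FEX then appear as Ethernet100/1/x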


Fabric Extender Scaling: Scalability for Host vPC, Nexus 2000 Straight-Through

Across the n5k01/n5k02 pair: max 24 FEXes = 1,152 ports

Max 480 vPCs (each vPC has 2 ports)


    Cisco Nexus 1000V


Cisco Nexus 1000V: Components

[Diagram: vCenter Server and Cisco VSMs managing VEMs on hosts running VM#1 through VM#9]

Virtual Ethernet Module (VEM)

Replaces VMware's virtual switch

Enables advanced switching capability on the hypervisor

Provides each VM with dedicated switch ports

Virtual Supervisor Module (VSM)

CLI interface into the Nexus 1000V

Leverages NX-OS 4.04a

Controls multiple VEMs as a single network device


In a four-NIC implementation:

Access switch configured with trunk ports (no EtherChannel)

VEM configured with source-based hashing

N1KV Port Channel 1 (vPC-HM): VM data

N1KV Port Channel 2 (vPC-HM): Service Console, VM Kernel, VEM control and packet

[Diagram: SC and VMK traffic carried on one upstream vPC-HM uplink bundle and VM traffic on a second, each split into sub-groups SG0/SG1; the upstream switches use trunk edge ports supporting only the VM VLANs]

Connecting the Virtual Access Layers: Connecting Nexus 1000V to 2148T without vPC


[Diagram: independent links (edge ports) on the upstream switches; traffic load-shared across up to 8 links using a MAC hash]

Connecting the Virtual Access Layers: Connecting Nexus 1000V to 2148T

The 4.0(4)SV1(1) release of N1KV supports MAC pinning with up to 8 links in the uplink bundle

Allows for up to 8 independent ports spread across multiple switches in the physical access layer

If a failover occurs, all the traffic pinned to an interface is migrated to the other interfaces.

Nexus1000(config)# port-profile sys-uplink
Nexus1000(config-port-prof)# no shut
Nexus1000(config-port-prof)# capability uplink
Nexus1000(config-port-prof)# channel-group auto mode mac-pinning
Nexus1000(config-port-prof)# switchport mode trunk
Nexus1000(config-port-prof)# switchport trunk allowed vlan 10-25
Nexus1000(config-port-prof)# state enabled
Nexus1000(config-port-prof)# vmware port-group


[Diagram: SC and VMK traffic carried on one upstream uplink bundle and VM traffic on a second; both are vPC MCEC bundles to the upstream pair]

vPC between a pair of N5K/N2K allows symmetrical EtherChannels

One of the benefits of using port-channels for connectivity is the reduction in the amount of flooding/broadcast that the software switch has to drop

17 hashing algorithms are available

Selected either system-wide or per module

Default is source MAC

N1KV Port Channel 1: VM data

N1KV Port Channel 2: Service Console, VM Kernel, VEM control and packet

Connecting the Virtual Access Layers: Connecting Nexus 1000V to 2148T


    Virtualized Access Layer Design

    vPC Design Principles

    Extending vPC from access to aggregation

Nexus 2000 Fabric Extender Physical Virtualization

Nexus 1000v Embedded Virtual Switching

    Nexus 4000 Unified I/O

    UCS 6100 Integrating Unified Compute



Nexus 4001I Switch Module

14 x 10G downlink and 6 x 10G uplink ports

Dual-mode downlink ports (1G/10G to the server)

RJ-45 management interface

RS-232 console port

Goes in the high-speed slots in IBM BCH or BCH-T

Max of four 4001Is per chassis

Supports CX1 SFP+, SR, and LSR optics

[Diagram: module faceplate showing the console and management ports, ejector handles, and 10G uplink ports]

Integration with server virtualization; consolidate infrastructure with Unified IO; scale bandwidth with 10G and multipathing

Unified Fabric with FCoE: Phase 2, Adding the Nexus 4000 (Q4CY09)


[Diagram: FIP exchange between an ENode and an FCoE switch; VLAN discovery, FCF discovery, and FLOGI/FDISC with accept run over FIP (the FCoE Initialization Protocol), while FC commands and responses run over the FCoE protocol]

FIP is the FCoE Initialization Protocol

Enables an FCoE adapter to discover FCFs on a VLAN and establish a VN-VF link with one of the FCFs

A separate Ethertype makes FIP messages easier to intercept for FIP snooping by traditional Ethernet bridges, as opposed to standard FCoE login frames

Building the foundation for future multi-hop FCoE topologies

Unified Fabric with FCoE: FIP, the FCoE Initialization Protocol


[Diagram: FIP-capable topology; an FCF (MAC 0E.FC.00.DD.EE.FF) in front of the SAN, a FIP-snooping bridge, and an ENode (MAC 0E.FC.00.07.08.09); a device spoofing the FCF MAC is blocked]

Unified Fabric with FCoE: FIP Snooping

FIP snooping on the Nexus 4000:

Security (protection from MAC address spoofing of FCoE end devices, ENodes)

Fibre Channel links are point-to-point

Ethernet bridges can utilize ACLs to provide the equivalent path control (equivalent of point-to-point)

The FIP protocol allows for efficient automatic configuration of the ACLs necessary to lock down the forwarding path (FIP snooping)

Ethernet-NPV (E-NPV), future: on the control plane (FIP Ethertype), an "Ethernet NPV bridge" improves over a "FIP snooping bridge" by intelligently proxying FIP functions between a CNA and an FCF


In a Unified I/O configuration (FCoE) we have two distinct topologies:

Isolated access switches: SAN A and SAN B

Combined access switches: vPC supporting MCEC

To ensure correct forwarding behavior, a vfc interface can only be associated with a vPC EtherChannel that has one physical interface per switch

Works with Gen-2 FIP-enabled CNAs ONLY

While the port-channel is the same on 5k1 and 5k2, the FCoE VLANs are different

FCoE VLANs are NOT carried on the peer-link

[Diagram: direct-attach topology; Nexus 5000 FCFs with spanning-tree port type edge trunk toward the servers, Ethernet uplinks to the LAN, and Fibre Channel uplinks to isolated SAN A and SAN B fabrics]

Data Center Access Architecture: FCoE High Availability Design Considerations


[Diagram: direct-attach topology; Nexus 5000 FCFs toward the LAN, SAN A, and SAN B; one 5k carries VLANs 10 and 20, the other VLANs 10 and 30, and VLAN 10 only exists on the LAN side]

Data Center Access Architecture: FCoE High Availability Design Considerations

A VLAN is dedicated to every VSAN in the fabric

The VLAN is signaled to the hosts over FIP

The FCoE controller in the host tags all subsequent FIP login and FCoE frames with the signaled FCoE VLAN

This does not require trunking to be enabled at the host driver, as tagging is performed by the FCoE controller on the host

All ports in the FCoE network have to be enabled for trunking to be able to carry VLAN-tagged frames

! VLAN 20 is dedicated to carry traffic for VSAN 2
(config)# vlan 20
(config-vlan)# fcoe vsan 2
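
A minimal Nexus 5000 sketch completing the mapping (hypothetical interface numbers): the dedicated VLAN carries VSAN 2, and a virtual Fibre Channel interface is bound to the server-facing port:

feature fcoe
vlan 20
  fcoe vsan 2
! Virtual FC interface bound to the CNA-facing Ethernet port
interface vfc 1
  bind interface Ethernet1/1
  no shutdown
vsan database
  vsan 2 interface vfc 1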


[Diagram: direct-attach topology; a single initiator dual-homed via a port-channel to a single Nexus 5000 toward the LAN, SAN A, and SAN B; legend: 4G FC, 10G Ethernet, 10G Unified I/O]

Data Center Access Architecture: FCoE High Availability Design Considerations

Single initiator dual-homed via a port-channel to a single Nexus 5000

No ability to isolate SAN A and SAN B

This is an unsupported configuration

Not consistent with Fibre Channel high-availability design requirements

Neither direct-attach nor multi-hop-capable CNAs are currently supported


[Diagram: FIP-capable topology; a multi-hop-capable CNA dual-homed via EtherChannel to FIP-snooping first-hop switches, then up to an FCF toward the LAN, SAN A, and SAN B; legend: 4G FC, 10G Ethernet, 10G Unified I/O]

Data Center Access Architecture: FCoE High Availability Design Considerations

Initial vPC is supported, as it is possible to isolate SAN A and SAN B traffic between the CNAs and the first-hop switches

No ability to isolate SAN A and SAN B between the first and second tier of switches

This is an unsupported configuration

Not consistent with Fibre Channel high-availability design requirements

[Diagram note: FCoE frames are load-balanced over the EtherChannel, so there is NO SAN A and SAN B isolation]


[Diagram: FIP-capable topology; a blade chassis with N4K-1/N4K-2 FIP-snooping bridges, each uplinked to a single N5K FCF (N5K-1/N5K-2), toward the LAN, SAN A, and SAN B; legend: 4G FC, 10G Ethernet, 10G Unified I/O]

Data Center Access Architecture: FCoE and Nexus 4000 Design Considerations

A Nexus 4000 enabled for multi-hop FCoE leverages FIP snooping

Uplinks carrying the FCoE VLAN must only be connected to a single upstream N5K (FCF) switch

On the Nexus 5000, enable feature fcoe

On the Nexus 4000, enable feature fip-snooping

From an FCoE perspective, this design is required to preserve the dual-fabric environment
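
A minimal sketch of the feature enablement called out above:

! Nexus 5000 (FCF)
feature fcoe
! Nexus 4000 (FIP-snooping bridge)
feature fip-snooping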


Data Center Access Architecture: Nexus 4000 Ethernet Only

[Diagram: Ethernet-only topology (10G Ethernet); N4K-1/N4K-2 uplinked to N5K-1/N5K-2 and on to N7K-1/N7K-2]

In Ethernet-ONLY topologies, standard Ethernet HA design rules can be utilized

Dual-NIC servers with standard NIC teaming are supported between the N4Ks

MCEC-enabled uplinks from the N4K to the first aggregation tier are supported, since there is no requirement for LAN A and LAN B topologies

No server-side vPC is currently supported


[Diagram: FIP-capable, multi-hop topology; a blade chassis with a multi-hop-capable CNA, N4K-1/N4K-2 FIP-snooping bridges, and N5K-1/N5K-2 FCFs toward the LAN, SAN A, and SAN B; legend: 4G FC, 10G Ethernet, 10G Unified I/O]

Multi-Hop Unified Fabric Configuration: FCoE Multi-Hop Configuration

Nexus 5000:

Switching mode for FC traffic

VSAN 1 mapped to VLAN 30 for FCoE

The VFC interface binds to the MAC address of the blade server

Port-channel downlink to the N4K is mode trunk (all VLANs allowed)

Nexus 4000:

VLANs 30 and 31 in FIP-snooping mode

Port-channel uplink to the N5K is mode trunk

QLogic CNA in a Windows blade server:

CNA configured as an 802.1Q trunk (for IP and FCoE traffic)
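
A minimal Nexus 5000 sketch of the multi-hop VFC binding described above (hypothetical MAC address): with a FIP-snooping bridge in the path, the VFC is bound to the blade server's MAC rather than to a local Ethernet interface:

feature fcoe
vlan 30
  fcoe vsan 1
interface vfc 10
  bind mac-address 00:c0:dd:11:22:33
  no shutdown
vsan database
  vsan 1 interface vfc 10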


    Virtualized Access Layer Design

    vPC Design Principles

    Extending vPC from access to aggregation

Nexus 2000 Fabric Extender Physical Virtualization

Nexus 1000v Embedded Virtual Switching

    Nexus 4000 Unified I/O

    UCS 6100 Integrating Unified Compute


    UCS 6100 End Host Mode

UCS Fabric Interconnect supports two modes of operation

    Switch Mode

    End Host Mode


Evolution of the DC Access Architecture: UCS 6100 End Host Mode

[Diagram: fabric interconnect with border interfaces toward the network and server interfaces toward the blades]


    UCS 6100 End Host Mode

Each server link (vNIC) is pinned to exactly one border link

Pinning logic load-balances server links across the various border links

Server-to-server traffic is locally switched

Server-to-network traffic goes out on the pinned border link

Network-to-server unicast traffic is forwarded to the server only if it arrives on the pinned border link (RPF check)

Server traffic received on any border link except the pinned border link is dropped (deja-vu check)

[Diagram: end-host (EH) node with border links and server links, showing the RPF check, the deja-vu check, and unicast forwarding of learnt traffic]


Evolution of the DC Access Architecture: UCS 6100 End Host Mode

[Diagram: border interfaces 1-4 and server interfaces (SIF) A-F across the I/O modules, with examples of correct and incorrect pinning configurations marked]


UCS 6100 End Host Mode

Virtual switching can be connected behind the End Host Virtualizer

All border interfaces of the same subnet must be in the same L2 domain

[Diagram: border interfaces and server interfaces (SIF) on the I/O modules, with virtual switches attached behind the end-host-mode fabric interconnect]


Assembling the New Data Centre Edge

[Diagram: end-to-end view. Core/aggregation layer: Nexus 7000 with MDS for the SAN. Access layer: Nexus 5000/7000 with Nexus 2000, connected over 10 GigE/FCoE and FCoE links. Server edge: 1G and 10GE rack-mount servers; 1G and 10GE blade servers via pass-thru modules (HP/IBM/Dell), a 10GE blade switch (HP), or the N4K DCB blade switch (IBM/Dell); UCS compute pods. Virtual access layer: Nexus 1000v hosting the VMs.]


    Summary

Virtualization of the network infrastructure improves utilization and offers new deployment models in the data center

Flexible service models readily account for application requirements

Security is a process, not a product; virtualization allows for efficient application of security policies

The application is the beneficiary of all these developments


    Additional Resources

Data Center Design Zone: http://www.cisco.com/en/US/netsol/ns748/networking_solutions_sub_program_home.html

Data Center Design: IP Network Infrastructure

Data Center Service Patterns

Implementing Nexus 7000 in the Data Center Aggregation Layer with Services

BRKDCT-2961 Recommended Reading


    BRKDCT-2961 Recommended Reading


Questions?


    Complete Your Session Evaluation

    Please give us your feedback!!

Complete the evaluation form you were given when you entered the room

This is session BRKDCT-2961

Don't forget to complete the overall event evaluation form included in your registration kit

YOUR FEEDBACK IS VERY IMPORTANT FOR US!!! THANKS
