Demystifying Network and I/O Convergence: What You Need to Know and Why
Simon Gordon, Senior Product Line Manager
Juniper Networks
Abstract
• Many customers are currently evaluating options for upgrading their data center network infrastructures, and have a natural desire to deploy the ideal products that provide the lowest cost of ownership and highest return on investment as they start to implement some level of I/O or network convergence.
• During this session, Simon will explain the standards involved in network and I/O convergence, the purpose each of these standards serves within the network, which standards are in use with products in the market today and the benefits they provide, and the implications of the still-evolving standards on investment protection with these products.
• Attendees will:
1. Learn how to apply the various networking standards when considering data center network upgrades
2. Learn how to take advantage of the benefits these standards and incorporated products provide to optimize their data center networks
3. Hear best practices and benefits for successful data center network infrastructure upgrades
Agenda

• How is the Data Center changing
– What are the protocols that apply
– What operational changes apply
• What can be done to prepare
– Lay out the data center for the future
• How best can I benefit today
– How to take the first steps
• What comes next
– Unified fabrics and full network convergence
What Storage Protocol

• Most of what I will say applies no matter the actual storage protocol in question…
– iSCSI
– FCoE
– NAS
– Other
• Some is specific to FCoE
– Specific FCoE deployment models
– FCoE operation
How is the Data Center Changing

• Old world
– Servers had fixed roles
– Servers had multiple NICs and HBAs
– The datacenter had multiple networks
• New world
– Fungible resources
– I/O & network consolidation
– Pooling of resources at much larger scale
Data Center Networks – Today

[Diagram: a separate Data Center LAN alongside dual-rail SAN A / SAN B]

Storage Area Network: heavily pooled, low oversubscription, low latency, dual rail
Data Center LAN: north-south client/server traffic; limited east-west server and virtualization traffic; high oversubscription possible with limited server mobility
Datacenter Design – The Server Eye View

[Diagram: remote client systems reach the data center over the WAN, campus client systems over the campus LAN. Inside the data center, DC LANs connect the presentation layer (file servers, email servers), the application layer, and the database layer, with SAN A/B and backup servers behind them.]
Data Center Networks – In Reality…

[Diagram: many parallel LANs (LAN 1, LAN 2, LAN 3a/3b, LAN 4a/4b) alongside SAN A / SAN B]

Storage Area Network: heavily pooled, low oversubscription, low latency, dual rail
Data Center LANs: north-south client/server traffic; limited east-west server and virtualization traffic; high oversubscription possible with limited server mobility
Server View – Load Balancing & High Availability

[Diagram: each server runs a mix of active-active NIC teams, active-passive NIC teams, IP load balancing, and SCSI load balancing across its many networks:]
– Campus network
– Cluster/application network
– Backup network
– Storage network
– Management network
– …
Server View – Load Balancing & High Availability

[Diagram: the same per-server view of NIC teams and IP/SCSI load balancing]

Different networks are set up with different topologies.
Moving to the New World

• The convergence protocols
– Mostly DCB implications
– Some FCoE implications
• Load balancing & HA deployments
– Across multiple Ethernet services
– Ethernet vs. FC/FCoE model
Data Center Networks – Ethernet Converged

DCB-based LAN: separation comes from VLANs, 802.1p user priorities, and DCB functionality

[Diagram: SAN A / SAN B hang off the converged DCB LAN]

Storage Area Network: heavily pooled, low oversubscription, low latency, dual rail
Data Center LAN: collapsing the multiple Ethernet network segments into a common 10G access layer
One Wire, 8 Channels – PFC & ETS

[Diagram: one physical port carries eight priority channels, each with its own TX queue and RX buffer. On priorities with PFC ON, a filling RX buffer triggers a pause (STOP) and the sender holds that queue; on priorities with PFC OFF, the sender keeps sending and overflow frames are dropped. ETS then maps the queues into class groups (Class Group 1, 2, 3) and schedules link bandwidth between the groups, so realized traffic over the intervals T1, T2, T3 follows the configured allocations rather than the offered load.]

DCBX is just DCB auto-negotiation. The standard gives flexibility on implementation.
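To make the ETS behavior in the diagram concrete, here is a minimal Python sketch of an assumed weighted-sharing model (an illustration of the idea, not the 802.1Qaz scheduler itself): each class group is guaranteed its configured fraction of the link, and bandwidth a group does not use is lent to groups that still have demand.

```python
# Assumed weighted-sharing model of ETS for illustration only: each class
# group gets its configured fraction of the link, and bandwidth unused by
# one group is redistributed to groups that still have demand.
def ets_realized(offered, weights, capacity=1.0):
    realized = {g: 0.0 for g in offered}
    remaining = dict(offered)               # unserved demand per group
    active = {g for g in offered if remaining[g] > 0}
    spare = capacity
    while spare > 1e-9 and active:
        total_w = sum(weights[g] for g in active)
        leftover = 0.0
        for g in list(active):
            share = spare * weights[g] / total_w
            take = min(share, remaining[g])
            realized[g] += take
            remaining[g] -= take
            leftover += share - take
            if remaining[g] <= 1e-9:
                active.discard(g)
        if leftover >= spare - 1e-9:        # nothing consumed this round
            break
        spare = leftover                    # redistribute the unused share
    return realized

# Illustrative numbers: group 2 under-uses its 60% allocation, so groups 1
# and 3 absorb the slack in proportion to their weights.
print(ets_realized(offered={1: 0.2, 2: 0.3, 3: 0.8},
                   weights={1: 0.1, 2: 0.6, 3: 0.3}))
```

Running the illustrative numbers shows group 2 consuming only half of its 60% share, with the slack flowing to groups 1 and 3 in proportion to their weights rather than being wasted.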
Server View – Load Balancing & High Availability

[Diagram: multiple services sharing two NICs]

Having collapsed many NICs and networks, each of which may have been configured and operated differently, it is necessary to consider the load balancing and high availability model in a single cohesive way for the new converged network…
Ethernet – Load Balancing & High Availability

[Diagram: switch stack vs. independent switches]

The physical switch deployment model, as well as the L2/L3 topology, will determine your options for NIC teaming, HA, and load balancing.
Server View – Load Balancing & High Availability

[Diagram: Service 1 (non-HA), Service 2 (HA), Service 3 (HA), and Service 4 (non-HA) ride a teamed MAC over Ethernet MAC1 and MAC2]

You also need to map each service to the right user priority and VLAN, as well as the right MAC (is the O/S or hypervisor enabled for DCB?).
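To make that mapping concrete, a minimal sketch follows; the service names, VLAN IDs, and priority values are hypothetical (priority 3 is shown for FCoE only because it is a common convention for the lossless PFC class):

```python
# Hypothetical service-to-network mapping on a converged DCB host.
# All VLAN IDs, priorities, and MAC choices here are illustrative.
SERVICE_MAP = {
    # service:     (VLAN, 802.1p priority, source MAC)
    "management":  (100,  0, "teamed"),    # best effort on the NIC team
    "cluster":     (200,  4, "teamed"),    # latency sensitive, lossy class
    "backup":      (300,  2, "teamed"),    # bulk traffic, low priority
    "fcoe_san_a":  (1001, 3, "fcoe_vn"),   # lossless via PFC, own VN MAC
}

def egress_tags(service: str):
    """Return the (VLAN, user priority) pair to stamp on a service's frames."""
    vlan, prio, _mac = SERVICE_MAP[service]
    return vlan, prio

print(egress_tags("fcoe_san_a"))  # -> (1001, 3)
```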
FCoE Clarifications – Multi-Hop

• Classic FC definition
• Cascaded FCoE-based FCFs
• Everything is exactly the same as classic FC, except I now have point-to-point connections on Ethernet wires instead of on Fibre Channel wires
• VE_Port-to-VE_Port ISLs instead of E_Port-to-E_Port ISLs
FCoE Clarifications – Multi-Hop

• Another possible definition
• I have pure L2 hops (L2 Ethernet switches) in between my FC-aware devices
• L2 switches in between the servers and the FCoE-enabled FCF
• L2 switches in between a group of FCoE-enabled FCFs
FCoE Implications

• In both cases:
• L2 visibility, whether physical or through VLANs, determines what virtual FC connections can possibly be established
• The FC layer has no understanding of the underlying Ethernet topology, which is perhaps unfortunate and places heavy requirements on L2 partitioning
FCoE Load Balancing – Virtual Connection Status

[Diagram: server E-Nodes and FCFs on a single FC fabric]

• Server E-Node initiates discovery
• Multiple FCFs respond – even from the same virtual fabric
• Server E-Node selects one (or more) FCF per virtual fabric
• Virtual N_Port to virtual F_Port connection established – HOPEFULLY the servers are naturally distributed evenly
• FIP keep-alive monitors the virtual connection
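A small sketch of this flow (assumed behavior, with hypothetical names) shows why even distribution is only a hope: each E-Node picks its FCF independently, with no coordination across servers. Real FIP selection also weighs the FCF priority field and login availability, which this deliberately omits.

```python
import random

# Hypothetical FIP discovery sketch: every FCF in the VLAN responds to an
# E-Node's solicitation, and the E-Node picks one FCF per virtual fabric.
FCFS = ["fcf-1", "fcf-2"]  # both front the same FC fabric

def discover_and_login(e_nodes):
    logins = {}
    for node in e_nodes:
        advertisers = list(FCFS)             # all FCFs respond
        chosen = random.choice(advertisers)  # uncoordinated per-server pick
        logins[node] = chosen                # VN_Port to VF_Port established
    return logins

# With uncoordinated choice, the spread across FCFs is only statistically
# even; a single run can easily land most servers on one FCF.
print(discover_and_login([f"server-{i}" for i in range(8)]))
```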
FCoE & Multiple Fabrics

[Diagram: the same discovery sequence, with the FCFs split across FC Fabric 1 and a second fabric: each server E-Node discovers and selects an FCF per virtual fabric, establishes its virtual N_Port to virtual F_Port connections, and FIP keep-alive monitors each connection]
FCoE – Load Balancing & High Availability

[Diagram: traditional pure dual rail vs. a logically dual-rail design whose details depend on VLAN configuration; LAN L2 Domain 1 / L2 Domain 2 in front of SAN A / SAN B]

NOTE: we can largely consider the SAN (from the perspective of the Ethernet network) as just another end point on that network…
Particularly with HW-based CNAs, we can also largely consider the FCoE HA model separately from the general Ethernet HA model – though some may be concerned about the risk to the classic dual-rail SAN.
FCoE Logical Connectivity – Single Ethernet, Dual SAN

[Diagram: each server E-Node exposes two FCoE VN_Ports over a single Ethernet network; each VN_Port logs in to a VF_Port on a different FCF E-Node, preserving the dual-SAN separation]
Server View – Software-Based FCoE Stack

[Diagram: FC/SCSI LB/HA runs over FCoE VN_Port instances (FPMA MAC4, FCMA MAC5, FCMA MAC6) in the software stack, alongside a teamed MAC (MAC3), all over Ethernet MAC1 and MAC2]
Server View – Hardware-Based FCoE Stack

[Diagram: the O/S and services see FC/SCSI LB/HA over FCoE VN_Ports (FPMA MAC4, FCMA MAC5) presented by the CNA hardware, alongside a teamed MAC (MAC3) over Ethernet MAC1 and MAC2]
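For background on the FPMA addresses shown in these stack diagrams: FC-BB-5 forms a Fabric Provided MAC Address by concatenating the 24-bit FC-MAP prefix with the VN_Port's 24-bit FC_ID assigned at fabric login. A minimal sketch (the default FC-MAP value 0x0EFC00 comes from the standard; the FC_ID used here is illustrative):

```python
DEFAULT_FC_MAP = 0x0EFC00  # default FC-MAP prefix defined by FC-BB-5

def fpma(fc_id: int, fc_map: int = DEFAULT_FC_MAP) -> str:
    """Build a Fabric Provided MAC Address: 24-bit FC-MAP || 24-bit FC_ID."""
    assert 0 <= fc_id < 2**24 and 0 <= fc_map < 2**24
    raw = (fc_map << 24) | fc_id
    return ":".join(f"{(raw >> s) & 0xFF:02x}" for s in range(40, -1, -8))

# Example: an illustrative FC_ID of 0x010203 yields 0e:fc:00:01:02:03
print(fpma(0x010203))
```

Because the MAC embeds the fabric-assigned FC_ID, the address only exists for the life of the login, which is one reason the FCoE HA model can be treated separately from the general Ethernet MAC teaming model.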
Other Stuff

• Solving the spanning tree problem
– TRILL, 802.1aq, stacking, distributed LAG
• Options for congestion management
– ECN, TCP, QCN, ICMP source quench (see the sketch after this list)
• Handling virtual servers
– VEPA etc.
– Potential overlap between virtual switches and storage over Ethernet
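As a concrete illustration of the first of those congestion-management options, here is a minimal ECN-style sketch (illustrative only; the threshold and field names are assumptions): a congested queue marks packets instead of dropping them, and the sender backs off when the receiver echoes the mark.

```python
# Minimal ECN-style marking sketch, not any product's implementation:
# a queue past its threshold marks packets rather than dropping them,
# and the sender halves its rate when the mark is echoed back.
MARK_THRESHOLD = 20  # queue depth in packets; assumed value

def enqueue(queue_depth: int, packet: dict) -> dict:
    if queue_depth > MARK_THRESHOLD:
        packet["ce"] = True                  # Congestion Experienced mark
    return packet

def sender_rate_update(rate: float, ack_ece: bool) -> float:
    return rate / 2 if ack_ece else rate     # TCP-like multiplicative decrease

pkt = enqueue(queue_depth=35, packet={})
print(pkt, sender_rate_update(10.0, ack_ece=pkt.get("ce", False)))
# -> {'ce': True} 5.0
```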
Agenda

• What can be done to prepare
– Lay out the data center for the future
Physical Preparations

• Understand the cabling requirements
– 10GbE / 40GbE / 100GbE
– Copper connectivity to the server
• Implications for data center layout
– Servers / racks / rows / PODs
– Location of networking equipment
The Short Story

• 10G first hop most likely copper
– Server to ToR
– Limited to a few meters
• Subsequent hops optical, 40G+
– ToR to aggregation / fabric core
– 150 m on OM4 for 40G & 100G
• Data center layout
– PODs of 32 to 128 racks
– Multiple PODs making up the data center
Data Center Cabling for 40GbE & 100GbE

Aggregation/core layer:
– QSFP/CXP transceivers with MTP connectors
– MMF OM3 cable minimum (100 m); OM4 preferred for slightly greater reach
– Structured cabling (12 pair)

Access layer:
– Twinax copper cable: 1, 3, 5, 7 meter
– SFP+ optical transceiver: 300 meter MMF with SR
– SFP transceiver: copper (1000BASE-T)
40GbE/100GbE-Ready Access Cabling

• Use MTP-terminated trunk cables between access and aggregation switches
• MTP-to-LC cassettes and LC-to-LC breakout cables for access layer connections
• Patch panel housing in the main distribution area
• MTP-to-LC harnesses from patch panel to switches

[Diagram: a 1U patch panel; an MTP-LC cassette breaks out MTP to LC connectors; 12-fiber ribbons in bundles of 24, 48, or 96 fibers; MTP ribbon cables terminated at the adapter]
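As a quick sanity check on these reach limits during planning, a minimal sketch follows; the media names and figures are taken from the guidance above and should be treated as planning assumptions, not a substitute for transceiver datasheets:

```python
# Approximate reach limits (meters) from the cabling guidance above.
REACH_M = {
    "twinax-dac":   7,    # 10G copper, server to ToR
    "10g-sr-mmf":   300,  # SFP+ SR over MMF
    "40g-sr4-om3":  100,  # QSFP over OM3, the minimum fiber grade
    "40g-100g-om4": 150,  # OM4, slightly greater reach
}

def media_options(distance_m: float):
    """List media whose quoted reach covers the requested run length."""
    return [m for m, limit in REACH_M.items() if distance_m <= limit]

print(media_options(120))  # -> ['10g-sr-mmf', '40g-100g-om4']
```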
Agenda

• How best can I benefit today
– How to take the first steps
Can I Benefit Today?

• Yes – plan to deploy in phases in 2011
– Ethernet I/O consolidation
  • Understand and deploy DCB
– Flatten/simplify the Ethernet network
  • Readiness for convergence and virtualization
– Converge the access layer using FCoE
  • FCoE transit switch
  • FCoE-FC gateway using NPIV
– Or converge with iSCSI, NAS, etc.
Data Center Networks – FCoE-Based Convergence

[Diagram: a converged access layer in front of SAN A / SAN B]

Data Center LAN: common access layer – FCoE transit switch or FCoE-FC gateway; separate Ethernet aggregation / FC backbone; uplinks to the SAN backbone could be either FC or FCoE. What about blade servers?
FCoE Transit Switch / FCoE-FC Gateway

[Diagram, left: FCoE servers with CNAs (VN_Ports on DCB ports) connect to an FCoE transit switch (FIP snooping, with FIP ACLs on its DCB ports), which forwards to VF_Ports on an FC-FCoE switch or gateway attached to the FC switch.
Right: FCoE servers connect to an FCoE-FC gateway (NPIV proxy) that presents VF_Ports downstream and N_Ports upstream to F_Ports on the FC switch.]
Server Access Layer Convergence Options

FCoE Transit Switch:
– Comparatively low cost: just a DCB switch with FIP snooping
– Interesting at ToR; may occur as an embedded blade server switch instead of a pass-through module
– Just DCB + FIP snooping
– For FC-BB-5 it cannot provide an end-to-end FCoE solution – a full FC fabric is still required somewhere
– Clean management separation, as there is nothing for the SAN team to actively manage

FCoE-FC Gateway:
– Higher cost for the FC personality: software license and/or physical ports
– Interesting at ToR; investment protection if you can buy it as a DCB switch and upgrade later to a gateway
– Converts from FCoE to FC; also works as a DCB switch; not an FCF but an NPIV-based proxy
– Requires an external FC fabric; can also support upstream FCoE transit switches
– Some overlap of management, as the gateway is active in the data path and load balancing
FCoE-FC Gateway at ToR – No Multi-Hop FCoE

[Diagram: servers with CNAs (VN_Ports) attach to FCoE-FC gateways at the ToR; each gateway presents GW VF_Ports to the servers, passes LAN traffic to the Ethernet infrastructure, and connects GW N_Ports to FCF F_Ports on the dual FC fabrics]
FCoE-FC Gateway at ToR – No Multi-Hop FCoE

[Diagram: the same design with blade servers, whose pass-through modules present each blade's CNA (VN_Port) directly to the ToR FCoE-FC gateway]
FCoE-FC Gateway at ToR – Limited Multi-Hop FCoE

[Diagram: blade servers with an embedded DCB switch between the blade CNAs (VN_Ports) and the ToR FCoE-FC gateway, adding one Ethernet hop on the FCoE path]
FCoE Transit Switch at ToR – Limited Multi-Hop FCoE

[Diagram: servers with CNAs (VN_Ports) attach to an FCoE transit switch at the ToR, which forwards to FCF VF_Ports in front of the dual FC fabrics, with the Ethernet infrastructure carrying the LAN traffic]
FCoE Transit Switch at ToR – Limited Multi-Hop FCoE

[Diagram: the same design with blade servers, whose pass-through modules present each blade's CNA (VN_Port) to the ToR FCoE transit switch]
FCoE Transit Switch at ToR – Multi-Hop FCoE

[Diagram: blade servers with an embedded FCoE transit switch feeding the ToR FCoE transit switch, which forwards to FCF VF_Ports in front of the dual FC fabrics – multiple FCoE hops end to end]
Agenda

• What comes next
– Unified fabrics and full network convergence
What Comes Next

• The FABRIC is coming
– Again, understand the full implications
– It's not just about eliminating spanning tree
– Implications of fully overlaying the networks
• FCoE is still evolving: FC-BB-6
– Scale UP
– Scale DOWN
– Implications of overlay
FC-BB-6 (Expected 2012)

[Diagram: evolution from the traditional FC SAN through FC-BB-5 converged access toward FC-BB-6]
Thank You – Questions?

• 2011 is the year of…
– Deploying 10G at the edge
– Getting ready for 40G/100G at the core
– Considering fabric implications
– Deploying full convergence at the edge
• 2012
– Will it be the year for full convergence?