© 2010 Cisco and/or its affiliates. All rights reserved. Cisco Public
Unified Fabric with FCoE: Design, Operation and Management Best Practices
Jaromír Pilař
Consulting Systems Engineer
Cisco Expo 2011
• Unified Fabric in Data Center
Basic principles and technology enablers
• Fibre Channel over Ethernet
Encapsulation, FCoE and DCB
Standardization
• Unified Fabric product portfolio
Nexus 7000, Nexus 5000/5500 and Nexus 2232
MDS 9500
Converged Network Adapters
• Unified Fabric Deployment
Single-hop and Multi-hop scenarios
• Conclusions
Unified Fabric: Basic Principles and Technology Enablers
• IT organizations operate multiple parallel networks
IP and other LAN protocols over an Ethernet network
SAN over a Fibre Channel network
HPC/IPC over an InfiniBand network
• Unified Fabric consolidates all three types of traffic onto a single network
• Servers have a common interface adapter that supports all three types of traffic
IPC: Inter-Process Communication
• Single network instead of three
(Diagram: a server's processor, memory and I/O subsystem with separate LAN, storage and IPC connections, versus a single I/O subsystem carrying all three; IPC: Inter-Process Communication)
• Fewer CNAs (Converged Network Adapters) instead of NICs, HBAs and HCAs
• Limited number of interfaces for blade servers
• Standardized and reduced cabling
(Diagram: NICs carrying Ethernet traffic, FC HBAs carrying FC traffic and HCAs carrying IPC traffic are replaced by two CNAs; all traffic goes over 10 GE)
Fibre Channel over Ethernet
• From a Fibre Channel standpoint it's
FC connectivity over a new type of cable called… an Ethernet cloud
• From an Ethernet standpoint it's
Yet another ULP (Upper Layer Protocol) to be transported, but… a challenging one!
• And technically…
FCoE is an extension of Fibre Channel onto a lossless Ethernet fabric
(Frame layout: Ethernet Header | FCoE Header | FC Header | FC Payload | CRC | EOF | FCS)
FC Header through CRC: same as a physical FC frame
FCoE header carries control information: version, ordered sets (SOF, EOF)
Normal Ethernet frame, EtherType = FCoE
• 10Gbps Ethernet
• Lossless Ethernet
Matches the lossless behavior guaranteed in FC by B2B credits
• Ethernet jumbo frames
Max FC frame payload = 2112 bytes
Total max frame size = 2180 bytes
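The sizes above can be sanity-checked with a short calculation. The per-field byte counts below are the commonly cited FCoE frame breakdown (assuming a VLAN-tagged frame), filled in here for illustration rather than taken from the slide:

```python
# Maximum FCoE frame size, field by field (bytes).
FIELD_BYTES = {
    "ethernet_header": 14,  # destination MAC, source MAC, EtherType
    "vlan_tag": 4,          # 802.1Q tag; FCoE frames normally carry one
    "fcoe_header": 14,      # version bits, reserved fields, encoded SOF
    "fc_header": 24,        # standard Fibre Channel frame header
    "fc_payload": 2112,     # maximum FC frame payload
    "fc_crc": 4,            # FC CRC, kept intact inside the encapsulation
    "eof_and_padding": 4,   # encoded EOF plus reserved bytes
    "ethernet_fcs": 4,      # outer Ethernet frame check sequence
}

max_fcoe_frame = sum(FIELD_BYTES.values())
print(max_fcoe_frame)  # 2180 -> larger than 1518, hence the jumbo-frame requirement
```

Since a standard Ethernet frame tops out at 1518 bytes, this is why links carrying FCoE must support (at least "baby") jumbo frames.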
• Mapping of FC Frames over Ethernet
• Enables FC to Run on a Lossless Ethernet
FCoE: Fibre Channel traffic carried over Ethernet
(Frame layout: Ethernet Header | FCoE Header | FC Header | FC Payload | CRC | EOF | FCS)
FCoE is standardized by T11, the same organization that develops the Fibre Channel standards: standardized via FC-BB-5, June 2009
• Mapping of FC Frames over Ethernet
• Enables FC to Run on a Lossless Ethernet
• Priority Flow Control (IEEE 802.1Qbb) creates lossless Ethernet with classes of service
• Bandwidth Management (IEEE 802.1Qaz ETS) allows flexible bandwidth sharing between LAN and SAN
• The Data Center Bridging Exchange Protocol (IEEE 802.1Qaz DCBX) standardizes device-to-device communication about resources
FCoE + IEEE DCB: Fibre Channel traffic carried over lossless Ethernet
(Frame layout: Ethernet Header | FCoE Header | FC Header | FC Payload | CRC | EOF | FCS)
Standards progress (stages: Investigation → Development → Approval → Publication):
• FC-BB-5 – technically stable in October 2008, completed in June 2009, published in May 2010
• PFC – completed in May 2010, forwarded for publication in April 2011
• ETS – completed in November 2010, forwarded for publication in April 2011
• DCBX – completed in November 2010, forwarded for publication in April 2011
The two protocols have:
• Two different Ethertypes (FCoE 0x8906, FIP 0x8914)
• Two different frame formats
• Both are defined in FC-BB-5
FCoE itself
• Is the data plane protocol
• It is used to carry most of the FC frames and all the SCSI traffic
• Uses Fabric Assigned MAC address (dynamic)
FIP (FCoE Initialization Protocol)
• It is the control plane protocol
• It is used to discover the FC entities connected to an Ethernet cloud
• It is also used to login to and logout from the FC fabric
• Uses unique BIA on CNA for MAC
http://www.cisco.biz/en/US/prod/collateral/switches/ps9441/ps9670/white_paper_c11-560403.html
FCoE Is Fibre Channel at the Host and Switch Level
• Same operational model
• Same techniques of traffic management
• Same management and security models
• Easy to understand: completely based on the FC model
• Same host-to-switch and switch-to-switch behavior as FC (e.g., in-order delivery, FSPF load balancing)
• WWNs, FC-IDs, hard/soft zoning, DNS, RSCN
• FCF (Fibre Channel Forwarder) is the Fibre Channel switching element inside an FCoE switch
Fibre Channel logins (FLOGIs) happen at the FCF
Consumes a Domain ID
• FCoE encap/decap happens within the FCF
Forwarding based on FC information
(Diagram: an FCoE switch with FC Domain ID 15, consisting of an Ethernet bridge with Ethernet ports and an FCF with FC ports)
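The model above can be sketched in NX-OS-style CLI. This is a hedged illustration, not taken from the slides: interface, VLAN and VSAN numbers are invented, and exact syntax should be checked against the NX-OS FCoE configuration guide for the platform.

```
feature fcoe                      ! enable the FCF function on the switch
vlan 100
  fcoe vsan 10                    ! map the FCoE VLAN to a VSAN
interface vfc 1
  bind interface ethernet 1/1     ! the vFC rides on this converged 10GE port
  no shutdown
vsan database
  vsan 10 interface vfc 1         ! FLOGI on vfc 1 is handled by the FCF in VSAN 10
```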
(Diagram: FCoE port types. End nodes connect VN_Port to VF_Port on an FCoE switch (FCF) – available now. Two FCFs interconnect VE_Port to VE_Port – available now. An E-NPV switch connects its VNP_Port to a VF_Port on the FCF – planned.)
It's Fibre Channel: the same FC CLI is available on the Ethernet switch
(Diagram: FC E_Ports between FC switches, alongside FCoE VE_Ports between FCFs)
FCF and Virtual Expansion Port (VE_Port)
Fibre Channel Forwarders (FCFs) allow switching of FCoE frames across multiple hops
Creates standards-based FCoE ISLs
Necessary for multi-hop FCoE
Nothing further required (no TRILL, vPC or Spanning Tree)
The FCF supports all FC functionality:
Supports up to 7 hops
Supports up to 10,000 logins per fabric
Supports up to 8,000 zones per switch
Supports up to 500 zone sets per switch
(Diagram: FCoE VE_Ports between two FCFs)
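A VE_Port ISL between two FCFs can be sketched as follows; a hedged NX-OS-style example with invented interface numbers, showing a vFC bound to an Ethernet port-channel and placed in E mode:

```
interface port-channel 10         ! Ethernet ISL (one or more 10GE links) to the peer FCF
interface vfc 10
  bind interface port-channel 10  ! the VE_Port rides on the port-channel
  switchport mode E               ! E mode on a vFC makes it a VE_Port
  no shutdown
```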
Dedicated (FCoE) link: an Ethernet ISL carrying only FCoE-related traffic
Converged link: an Ethernet ISL carrying both LAN and FCoE traffic
Why support dedicated ISLs as opposed to converged ones?
Converged links:
• One wire for all traffic types (aggregate bandwidth 40G: FCoE 20G, Ethernet 20G)
• ETS: a QoS output feature guarantees minimum bandwidth allocation
• No clear port ownership
• HA: 4 links available
• Desirable for DCI connections
Dedicated links:
• Dedicated wire for each traffic type
• No extra output feature processing
• Distinct port ownership
• HA: 2 links available
• Complete storage traffic separation
Different methods producing the same aggregate bandwidth; dedicated links provide better management of storage traffic.
Both modes are available on the Nexus 5x00; on the Nexus 7000, dedicated links are supported at the "Delhi" FCS, while converged-link support is under consideration.
Nexus 7000: FCoE-Enabled Modular DC Switch
• M1 family – L2/L3/L4 with large forwarding tables and rich feature set
• F1 family – Low-cost L2 with high performance, low latency, low power and streamlined feature set
N7K-M108X2-12L
N7K-M148GT-11
N7K-M148GS-11/N7K-M148GS-11L
N7K-F132XP-15
N7K-M132XP-12
FCoE support planned for Q2CY2011
Dedicated Storage VDC – Converged Interfaces
• Model for host/target interfaces, not ISLs
• Separate VDC running ONLY storage-related protocols
• Ingress Ethernet traffic is split based on frame EtherType
• FCoE traffic is processed in the context of the Storage VDC
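As an illustration of that split, a hedged Nexus 7000 configuration sketch follows; the VDC name, interfaces and VLAN range are invented, and the exact commands should be verified against the NX-OS VDC configuration guide:

```
vdc fcoe type storage                             ! create the dedicated storage VDC
  allocate interface ethernet 3/1-8               ! F1 ports owned by the storage VDC
  allocate fcoe-vlan-range 100-110 from vdcs lan  ! FCoE VLANs split off the LAN VDC's converged ports
```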
(Diagram: at the access and aggregation layers, each Nexus 7000 runs a LAN VDC and a Storage VDC sharing converged I/O toward LAN and SAN; VDCs offer fault isolation for higher availability)
• Converging LAN+SAN does not reduce the number of links
Core links are usually fully utilized
The total number of ISLs will be the same
• Dedicated FCoE links are easier to manage
Clear SAN A/B separation
Simple bandwidth management
(Diagram: dedicated FCoE VE_Port ISLs alongside LAN links)
Cisco MDS 9500: FCoE-Enabled SAN Director
• Enables integration of existing FC infrastructure into Unified Fabric
8 FCoE ports at 10GE full rate in MDS 9506, 9509, 9513
80-Gbps front panel bandwidth
• Standards Support
T11 FCoE
Pre-Standard DCBX, PFC, ETS
• Connectivity – FCoE Only, No LAN
VE to Nexus 5000, Nexus 7000, MDS 9500
VF to FCoE Targets
• Optics Support
At FCS: 10GE SFP+ SR/LR, Active CX-1 (7/10m), EMC Active Cable (1/3/7/10m)
8-Port 10G FCoE Module – availability planned for Q2CY2011
Unified Fabric & FCIP
(Diagram: FCoE hosts and UCS attach through Nexus 2000, Nexus 5000 and Nexus 7000 to FCoE arrays; FC hosts and FC arrays attach through MDS 9000; the sites are interconnected by FCIP across an IP network between MDS 9000s. Link types: Ethernet, Fibre Channel, Unified I/O, FCoE)
Nexus 5000 and 2000: Building Blocks for the Next-Generation Access Layer
Industry's first I/O consolidation and virtualization fabric for the enterprise data center
Nexus 5000 Switch Family:
• Nexus 5020 – 56-port L2 switch: 40 fixed ports 10GE/FCoE/DCB, 2 expansion modules
• Nexus 5010 – 28-port L2 switch: 20 fixed ports 10GE/FCoE/DCB, 1 expansion module
Expansion modules:
• Ethernet: 6 ports 10GE/FCoE/DCB
• FC + Ethernet: 4 ports 10GbE/FCoE/DCB and 4 ports 1/2/4G FC
• Fibre Channel: 8 ports 1/2/4G FC
• Fibre Channel: 6 ports 2/4/8G FC
OS: Cisco NX-OS
Management: Cisco Fabric Manager and Cisco Data Center Network Manager
Evolution of the Nexus 5000 family providing increased functionality and scalability
Nexus 5500 Switch Family:
• Nexus 5596UP: 48 fixed unified ports, L3 capable (modules), 3 expansion modules
• Nexus 5548P and Nexus 5548UP: 32 fixed 1/10GE/FCoE/DCB or unified ports, L3 capable (daughter card), 1 expansion module
Expansion modules:
• 10GE/FCoE/DCB: 16 ports 1/10GE
• FC + Ethernet: 8 ports 1/10GE and 8 ports 1/2/4/8G FC
• Unified ports: 16 ports, each 1/10GE or 1/2/4/8G FC
OS: Cisco NX-OS
Management: Cisco Fabric Manager and Cisco Data Center Network Manager
Benefits / use cases:
• Deploy the Nexus 5500UP as a standard data center switch capable of all important I/O
• Mix FC SAN connectivity (host, switch and target) with FCoE SAN
• Implement with native Fibre Channel today; enables smooth migration to FCoE in the future
• One port for all types of server I/O; flexibility of use enables one standard chassis for all data center I/O needs
Features:
• Any Nexus 5500UP port can be configured as 1/10GE, DCB (lossless Ethernet), FCoE over 10GE (dedicated or converged link), or 8/4/2/1G native Fibre Channel
Unified Fabric: ultimate flexibility for server access connectivity
(Diagram: the same port carrying Ethernet, FCoE or native Fibre Channel traffic)
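The unified-port feature described above can be sketched as follows; slot and port numbers are invented, and to the author's understanding the FC-typed ports must be a contiguous range at the end of the module, with a reload required before the change takes effect:

```
slot 1
  port 29-32 type fc    ! retype the last four unified ports as 1/2/4/8G native FC
```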
• 10GE Fabric Extender
32x 1/10GE host interfaces; 8x 10GE network interfaces
10GE interfaces support FCoE
• Can mix and match with Nexus 2148T, Nexus 2224TP and Nexus 2248TP in network topologies
• Port-channel support on host and network interfaces
• ACL classification
• SPAN source/destination support
32 10GE/FCoE SFP+ downlinks, 8 10GE/FCoE SFP+ uplinks
DCNM web client and dashboard; DCNM-SAN and DCNM-LAN desktop clients; DCNM server with database
• Single data center pane of glass
• Collaborative management
• Common operations
Nexus tasks and tools:
• LAN admin (DCNM-LAN): FCoE VDC provisioning*, VLAN management, Ethernet configuration (L2, network security, vPC, QoS, etc.), DCB configuration (VL, PFC, ETS templates)
• SAN admin (DCNM-SAN): discovery of FCoE VDCs*, VLAN–VSAN mapping wizard (uses a reserved pool), vFC provisioning wizard, zoning
* Applies only to the Nexus 7000
Converged Network Adapters: I/O Consolidation in the Host
(Diagram: today a server uses a PCIe Ethernet adapter with a 10GbE link and a PCIe Fibre Channel HBA with its own link, each with separate drivers in the operating system; with a CNA, a single PCIe adapter carries both over 10G DCB links, while the operating system keeps using the same Ethernet and Fibre Channel drivers)
• Standard drivers
• Same management
• Operating system sees:
Dual-port, 10 Gigabit Ethernet adapter
Dual-port, 4 Gbps Fibre Channel HBA
• PCI Express Gen1 x8
• Dual port 10GbE
Passive copper
Optical SR
• Multi-chip solution
• Full height, full length
• Power = 27W
• QLogic 4Gb FC controller and drivers
• Intel Ethernet controller and drivers
• Windows, Linux, and VMware (ESX 3.5U4 & 4.0) support
• PCI Express x8 slot
• Single and dual port 10GbE
Active & passive copper
Optical SR & LR
Operates with QLogic optics only
• Fully Integrated ASIC
• FIP support
• Power ~7.4W (Dual Port with Optical SR)
No heat sink required
• Low Profile Form Factor
Unified Fabric Deployment: Single- and Multi-Hop Scenarios
(Diagram: in an Ethernet/IP network, switches provide bandwidth while services are layered on top; in a Fibre Channel fabric, every switch provides services such as DNS, FSPF, zoning and RSCN to the attached initiators I0–I5 and targets T0–T2)
• Ethernet/IP
Bandwidth and services are separate layers, offered by separate entities
• Fibre Channel
Bandwidth and services are collapsed, offered by the fabric
• A Unified Fabric design has to incorporate the superset of requirements
QoS: lossless 'and' lossy fabrics
High availability: a highly redundant network topology 'and' redundant fabrics
Bandwidth: FC fan-in and oversubscription ratios 'and' Ethernet/IP oversubscription
Security: FC controls (zoning, port security, …) 'and' IP controls (CISF, ACL, …)
Manageability and visibility: hop-by-hop visibility for FC 'and' Ethernet/IP
• Where is it beneficial to use converged link / dedicated link / unified technology within the Ethernet network?
At the edge of the fabric the volume of end nodes allows for a greater degree of sharing for LAN and SAN
Under-utilized links are prevalent at the access layer (especially with 10G) where combining multiple traffic types on a unified wire makes sense
Is there business case in the aggregation/core of the network to justify running Unified Wires?
• LAN and SAN HA models are very different (and not fully compatible), so which one wins in the event of a conflict?
• FC and FCoE are prone to head-of-line blocking (HOLB) in the network, and therefore we are limited in the physical topologies we can build
• Targets are attached at the SAN core/storage edge, but where do we attach targets in an FCoE network? Into the aggregation or core layer? Or is an Ethernet "storage edge" required?
• Where is it more beneficial to deploy two cores, SAN and LAN, over a "unified core" topology?
Physical isolation: separate physical networks are used for each fabric
Logical isolation: separate VLANs and VSANs are used to create multiple fabrics on the same devices
Isolation at the switch level, the VDC level, the wire level, or somewhere in between
(Diagram: L3 core, L2 aggregation with virtual port-channel (vPC), and access layer on the Ethernet side; separate FC fabrics A and B on the SAN side; iSCSI and FC attached storage)
• Servers and FCoE targets are directly connected to the Nexus 5000 over 10Gig FCoE
• Nexus 5000 operates as the FCF
• Native Ethernet LAN network and Native Fibre Channel network break off at the Nexus 5000 access layer
• vPC for converged link is currently not supported with Nexus 7000 (planned for future release)
Direct-Attached Topology
(Diagram: FIP- or pre-FIP-enabled CNAs and FCoE targets attach directly to a pair of Nexus 5000 FCFs; a vPC carries traffic toward the Ethernet LAN while native Fibre Channel breaks out to SAN A and SAN B. Link types: enhanced Ethernet and FCoE, Ethernet LAN, native Fibre Channel)
• 20 line-rate 10GE ports: 14 downlink and 6 uplink interfaces
• Downlink interfaces can operate at 1G or 10G
• Non-blocking architecture
• Low latency (1.5 µs)
• Support for FCoE (FIP snooping) and IEEE DCB
• Unified Fabric requires fewer I/O modules in blade enclosures
• High-speed slots (4) for the IBM BladeCenter H chassis
• Blade servers connect to Nexus 4000 over 10Gig FCoE
Nexus 4000 is a FIP-Snooping Bridge
• Nexus 4000 connects to Nexus 5000 over 10Gig FCoE
Nexus 5000 operates as the FCF
• Native Ethernet LAN network and Native Fibre Channel network break off at the Nexus 5000
(Diagram: CNA mezzanine cards in a blade chassis connect to a Nexus 4000 acting as a FIP snooping bridge, which uplinks to a pair of Nexus 5000 FCFs; from there traffic breaks out to the Ethernet LAN and to native Fibre Channel SAN A and SAN B, with FCoE targets attached to the Nexus 5000. Link types: enhanced Ethernet and FCoE, Ethernet LAN, native Fibre Channel)
• Servers connect to Nexus 2232 over 10Gig FCoE
Server connections to the Nexus 2232 can be Active/Standby or over a vPC
• Nexus 2232 is single homed to upstream Nexus 5000
FEX 2232 can be connected with individual links or a port-channel
Maximum distance between Nexus 5000 and Nexus 2232 is 300 m
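Host-facing vFCs for servers behind the FEX can be sketched as below; a hedged NX-OS-style example with invented numbers, assuming the FCoE VLAN-to-VSAN mapping is already in place on the Nexus 5000:

```
feature fcoe
interface vfc 101
  bind interface ethernet 101/1/1   ! FEX 101, host interface 1/1
  no shutdown
vsan database
  vsan 20 interface vfc 101         ! the server's FLOGI lands in VSAN 20
```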
(Diagram: FIP-enabled CNAs connect to a pair of Nexus 2232 FEXes, each single-homed to its Nexus 5000 FCF; a vPC carries traffic toward the Ethernet/LAN core while native Fibre Channel breaks out to SAN A and SAN B. Link types: enhanced Ethernet and FCoE, Ethernet LAN, native Fibre Channel)
• With NX-OS 5.0(2)N2(1), VE_Ports are supported on and between the Nexus 5000 and Nexus 5500
The supported distance is up to 3 km
• VE_Ports run between switches acting as Fibre Channel Forwarders (FCFs)
• VE_Ports are bound to the underlying 10G infrastructure
They can be bound to a single 10GE port
They can be bound to a port-channel interface consisting of multiple 10GE links
(Diagram: FIP-enabled CNAs attach over a vPC to a pair of edge Nexus 5000 FCFs, VN_Port to VF_Port; VE_Ports connect the edge FCFs to upstream Nexus 5000 FCFs, which break out to the Ethernet/LAN core and to SAN A and SAN B. Link types: enhanced Ethernet and FCoE, Ethernet LAN, native Fibre Channel)
Servers, FCoE attached Storage
• Multi-hop edge/core/edge topology
• Core SAN switches supporting FCoE
• N7K with DCB/FCoE line cards
• MDS with FCoE line cards (Sup2A)
• Edge FC switches supporting either
• N5K - E-NPV with FCoE uplinks to the FCoE enabled core (VNP to VF)
• N5K or N7K - FC Switch with FCoE ISL uplinks (VE to VE)
• Scaling of the fabric (FLOGI, …) will most likely drive the selection of which mode to deploy
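The E-NPV option can be sketched as follows; a hedged NX-OS-style example with invented numbers, in which the edge switch proxies logins upstream instead of consuming a domain ID (exact commands depend on the software release):

```
feature fcoe-npv                    ! FCoE NPV mode: no local domain ID
vlan 100
  fcoe vsan 10
interface vfc 20
  bind interface ethernet 1/20      ! uplink toward the FCoE-enabled core
  switchport mode NP                ! VNP_Port facing the core FCF's VF_Port
  no shutdown
```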
(Diagram: servers and FCoE-attached storage at the edge; an FCoE-enabled fabric of N7K or MDS switches in the core with FC-attached storage. An edge FCF in switch mode connects via VE_Ports; an edge switch in E-NPV mode connects its VNP_Port to a core VF_Port. Nexus 7000 FCoE support, the MDS FCoE module and E-NPV are planned for Q2CY2011)
Conclusions
• FCoE integrates with today's Fibre Channel SANs
• FCoE enables "unified technology"
Enables LAN and SAN traffic to share wires, devices and adapters for access-layer TCO benefits
• FCoE is based on Ethernet
Leverages Ethernet technology, investment, market presence and scaling capability
• FCoE invites more user choice
Aligns vendors from the storage and network markets (e.g. volume NIC suppliers)
The benefits are more choice, better assurance of technology supply, and price
• FCoE enables FC to become more accessible
FCoE going on motherboards means less cost and complexity vs. FC HBAs
O/S vendors will adopt native FCoE stacks: less cost and complexity
Q & A
Thank you.