FCoE vs. iSCSI: Making the Choice
Stephen Foskett
[email protected]
@SFoskett
FoskettServices.com
Blog.Fosketts.net
GestaltIT.com

"FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011


The notion that Fibre Channel is for data centers and iSCSI is for SMBs and workgroups is outdated. Increases in LAN speeds and the arrival of lossless Ethernet position iSCSI as a good fit for the data center. Whether your organization adopts FC or iSCSI depends on many factors: current product set, future application demands, organizational skill set, and budget. In this session we will discuss the conditions under which FC or iSCSI is the right fit, why you should use one, and when to kick either to the curb.


Page 1: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

FCoE vs. iSCSI
Making the Choice

Stephen Foskett
[email protected]
@SFoskett

FoskettServices.com
Blog.Fosketts.net
GestaltIT.com

Page 2: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

This is Not a Rah-Rah Session

Page 3: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

This is Not a Cage Match

Page 4: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

First, let’s talk about convergence and Ethernet

Page 5: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

Converging on Convergence

• Data centers rely more on standard ingredients
• What will connect these systems together?
• IP and Ethernet are logical choices

[Diagram: the modern data center builds on IP and Ethernet networks, Intel-compatible server hardware, and open systems (Windows and UNIX)]

Page 6: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

Drivers of Convergence

Virtualization
• Demanding greater network and storage I/O
• The “I/O blender”
• Mobility and abstraction

Consolidation
• Need to reduce port count, combining LAN and SAN
• Network abstraction features

Performance
• Data-driven applications need massive I/O
• Virtualization and VDI

Page 7: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

What's in it for you?

Server Managers
• More flexibility and mobility
• Better support for virtual servers and blades
• Increased overall performance

Network Managers
• Wider sphere of influence (Ethernet everywhere)
• More tools to control traffic
• More to learn
• New headaches from storage protocols

Storage Managers
• Fewer esoteric storage protocols
• New esoteric network protocols
• Less responsibility for I/O
• Increased focus on data management

Page 8: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

The Storage Network Roadmap

[Chart: “Network Performance Timeline,” gigabits per second from 1995 to 2019, built up in four stages: FCP alone, then Ethernet LAN added, then iSCSI and FCoE, then Ethernet backplane speeds climbing toward 100 Gb/s]

Page 9: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

Performance: Throughput vs. Latency

High Throughput / Low Latency

Page 10: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

Serious Performance
• 10 GbE is faster than most storage interconnects
• iSCSI and FCoE both can perform at wire rate

[Chart: full-duplex throughput in MB/s, 0 to 3500, for 1 GbE, SATA-300, SATA-600, 4G FC, FCoE, SAS 2, 8G FC, 4x SDR IB, 10 GbE, and 4x QDR IB]
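As a back-of-the-envelope check on the chart above, usable throughput falls out of line rate and encoding: 8b/10b links (1 GbE, 4G and 8G FC) lose 20% to line coding, while 10 GbE's 64b/66b loses about 3%. The line rates below are the standard published values; this sketch is illustrative, not data from the slide.

```python
# Approximate full-duplex data throughput for common interconnects,
# derived from line rate (Gbaud) and line-coding efficiency.

links = {
    # name: (line rate in Gbaud, encoding efficiency)
    "1 GbE":  (1.25,    8 / 10),   # 8b/10b
    "4G FC":  (4.25,    8 / 10),   # 8b/10b
    "8G FC":  (8.5,     8 / 10),   # 8b/10b
    "10 GbE": (10.3125, 64 / 66),  # 64b/66b
}

def full_duplex_mbps(gbaud, eff):
    """Usable payload rate in MB/s, both directions combined."""
    data_gbps = gbaud * eff          # strip line-coding overhead
    return data_gbps * 1000 / 8 * 2  # Gb/s -> MB/s, x2 for full duplex

for name, (gbaud, eff) in links.items():
    print(f"{name:7s} ~{full_duplex_mbps(gbaud, eff):5.0f} MB/s")
```

These figures (for example, about 1700 MB/s for 8G FC versus 2500 MB/s for 10 GbE) line up with the relative bar lengths in the chart.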

Page 11: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

Latency is Critical Too
• Latency is even more critical in shared storage

[Chart: 4K IOPS, 0k to 800k, for the same interconnects: 1 GbE, SATA-300, SATA-600, 4G FC, FCoE, SAS 2, 8G FC, 4x SDR IB, 10 GbE, and 4x QDR IB]
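Why latency dominates IOPS can be sketched in one line: with a fixed number of requests in flight, IOPS is bounded by queue depth divided by round-trip latency. The latency and queue-depth values below are illustrative, not measurements from the chart.

```python
# Idealized IOPS ceiling for small (e.g. 4K) requests: ignores
# controller and media limits, counts only round-trip latency.

def max_iops(latency_us, queue_depth=1):
    """Upper bound on IOPS: queue_depth requests in flight, each
    completing one round trip of latency_us microseconds."""
    return queue_depth * 1_000_000 / latency_us

# One outstanding I/O at 100 us of fabric + target latency:
print(max_iops(100))       # -> 10000.0
# The same latency with 32 requests in flight:
print(max_iops(100, 32))   # -> 320000.0
```

Cutting latency in half doubles the ceiling at every queue depth, which is why a lower-latency fabric helps even when raw bandwidth is not the bottleneck.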

Page 12: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

Benefits Beyond Speed
• 10 GbE takes performance off the table (for now…)
• But performance is only half the story:
▫ Simplified connectivity
▫ New network architecture
▫ Virtual machine mobility

Page 13: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

Server Connectivity

[Diagram: separate 1 GbE network, 1 GbE cluster, and 4G FC storage links consolidated onto a single 10 GbE connection, with 6 Gbps of extra capacity]

Page 14: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

Flexibility
• No more rat's nest of cables
• Servers become interchangeable units
▫ Swappable
▫ Brought online quickly
▫ Few cable connections
• Less concern about availability of I/O slots, cards, and ports
• CPU, memory, and chipset become the deciding factors, not the HBA or network adapter

Page 15: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

Changing Data Center
• Placement and cabling of SAN switches and adapters dictate where to install servers
• Considerations for placing SAN-attached servers:
▫ Cable types and lengths
▫ Switch location
▫ Logical SAN layout
• Applies to both FC and GbE iSCSI SANs
• A unified 10 GbE network allows the same data and storage networking in any rack position

Page 16: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

Virtualization: Performance and Flexibility
• Performance and flexibility benefits are amplified with virtual servers
• 10 GbE accelerates storage performance, especially latency (“the I/O blender”)
• Can allow performance-sensitive applications to run on virtual servers

Page 17: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

Virtual Machine Mobility
• Moving virtual machines is the next big challenge
• Physical servers are difficult to move around and between data centers
• Pent-up desire to move virtual machines from host to host, and even to different physical locations

Page 18: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

Enhanced 10 Gb Ethernet
• Ethernet and SCSI were not made for each other
▫ SCSI expects a lossless transport with guaranteed delivery
▫ Ethernet expects higher-level protocols to take care of issues
• “Data Center Bridging” is a project to create lossless Ethernet
▫ AKA Data Center Ethernet (DCE), Converged Enhanced Ethernet (CEE)
• iSCSI and NFS are happy with or without DCB
• DCB is a work in progress
▫ FCoE requires PFC (802.1Qbb, a per-priority PAUSE) and DCBX (802.1Qaz)
▫ QCN (802.1Qau) is still not ready

The DCB family:
• Priority Flow Control (PFC): 802.1Qbb
• Congestion Management (QCN): 802.1Qau
• Bandwidth Management (ETS): 802.1Qaz
• Data Center Bridging Exchange Protocol (DCBX): 802.1Qaz
• Traffic Classes: 802.1p/Q
• PAUSE: 802.3x
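To make PFC concrete, here is a rough sketch of what an 802.1Qbb pause frame looks like on the wire: a MAC Control frame (EtherType 0x8808) with opcode 0x0101, a priority-enable vector, and one pause timer per 802.1p class. The source MAC below is a made-up example, and the sketch is illustrative rather than a reference encoder.

```python
import struct

# Sketch of an 802.1Qbb Priority Flow Control (PFC) frame. PFC reuses
# the 802.3x MAC Control frame (EtherType 0x8808) with opcode 0x0101:
# a class-enable vector selects which of the eight 802.1p priorities
# to pause, and a per-priority timer gives the duration in quanta of
# 512 bit times.

PFC_DMAC = bytes.fromhex("0180c2000001")  # reserved multicast address
SRC_MAC  = bytes.fromhex("020000000001")  # made-up locally-admin MAC

def pfc_frame(pause_priorities, quanta=0xFFFF):
    """Build a PFC frame pausing the given 802.1p priorities."""
    vector = 0
    timers = [0] * 8
    for p in pause_priorities:
        vector |= 1 << p    # enable bit p in the class vector
        timers[p] = quanta  # timer is honored only if bit p is set
    payload = struct.pack("!HHH8H", 0x8808, 0x0101, vector, *timers)
    frame = PFC_DMAC + SRC_MAC + payload
    return frame + b"\x00" * (60 - len(frame))  # pad to minimum size

# Pause only priority 3 (a common choice for FCoE traffic):
f = pfc_frame([3])
print(len(f), f[12:14].hex(), f[14:16].hex())  # -> 60 8808 0101
```

The key contrast with plain 802.3x PAUSE is visible in the vector: only the enabled class stops, while the other seven priorities keep flowing on the same link.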

Page 19: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

Flow Control
• PAUSE (802.3x)
▫ Reactive, not proactive (unlike FC's credit-based approach)
▫ Allows a receiver to block incoming traffic on a point-to-point Ethernet link
• Priority Flow Control (802.1Qbb)
▫ Uses an 8-bit mask in PAUSE to specify 802.1p priorities
▫ Blocks a class of traffic, not an entire link
▫ Ratified and shipping
• Result of PFC:
▫ Handles transient spikes
▫ Makes Ethernet lossless
▫ Required for FCoE

[Diagram: PFC between Switch A and Switch B. Graphic courtesy of EMC]

Page 20: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

Bandwidth Management
• Enhanced Transmission Selection (ETS), 802.1Qaz
▫ Latest in a series of attempts at Quality of Service (QoS)
▫ Allows “overflow” to better utilize bandwidth
• Data Center Bridging Exchange (DCBX) protocol
▫ Allows devices to determine mutual capabilities
▫ Required for ETS, useful for others
• Ratified and shipping

[Chart: offered traffic vs. realized utilization on a 10 GbE link. HPC, storage, and LAN classes each offer around 3-4 Gb/s; when one class offers less, the others overflow into the spare bandwidth. Graphic courtesy of EMC]
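The overflow behavior in the chart above can be modeled with a toy allocator: each class keeps up to its guaranteed share, and whatever a quiet class leaves unused is redistributed to classes that still have demand, weighted by their guarantees. This illustrates the idea only; it is not the scheduler the standard specifies.

```python
# Toy model of Enhanced Transmission Selection (802.1Qaz) bandwidth
# sharing on a single link.

def ets_allocate(link_gbps, guarantees, offered):
    """guarantees and offered are {class: Gb/s}; returns {class: Gb/s}."""
    alloc = {c: min(guarantees[c], offered[c]) for c in guarantees}
    spare = link_gbps - sum(alloc.values())
    hungry = {c for c in alloc if offered[c] - alloc[c] > 1e-9}
    # Hand spare capacity to still-hungry classes, by guaranteed share.
    while spare > 1e-9 and hungry:
        weight = sum(guarantees[c] for c in hungry)
        for c in list(hungry):
            extra = min(spare * guarantees[c] / weight,
                        offered[c] - alloc[c])
            alloc[c] += extra
        spare = link_gbps - sum(alloc.values())
        hungry = {c for c in alloc if offered[c] - alloc[c] > 1e-9}
    return alloc

# HPC idle, storage offering 3G, LAN offering 8G on a 3/3/4 split:
print(ets_allocate(10, {"HPC": 3, "Storage": 3, "LAN": 4},
                   {"HPC": 0, "Storage": 3, "LAN": 8}))
```

With HPC idle, LAN grows from its guaranteed 4 Gb/s to 7 Gb/s while storage keeps its full 3 Gb/s: exactly the overflow effect the slide's chart shows.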

Page 21: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

Congestion Notification
• Need a more proactive approach to persistent congestion
• QCN, 802.1Qau
▫ Notifies edge ports of congestion, allowing traffic to flow more smoothly
▫ Improves end-to-end network latency (important for storage)
▫ Should also improve overall throughput
• Not quite ready

[Graphic courtesy of Broadcom]

Page 22: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

Now we can talk about iSCSI vs. FCoE

Page 23: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

Why Choose a Protocol?

• Performance
• Compatibility
• Strategy
• Cost

Page 24: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

Copyright 2006, GlassHouse Technologies

SAN History: SCSI
• Early storage protocols were system-dependent and short-distance
▫ Microcomputers used internal ST-506 disks
▫ Mainframes used external bus-and-tag storage
• SCSI allowed systems to use external disks
▫ Block protocol, one-to-many communication
▫ External enclosures, RAID
▫ Replaced ST-506 and ESDI in UNIX systems
▫ SAS dominates in servers; PCs use IDE (SATA)

Page 25: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

The Many Faces of SCSI

Every stack carries the SCSI-3 command set at the top; they differ in the layers beneath it (session down to physical):

• “SCSI”: SCSI / SPI / Parallel SCSI
• SAS: SSP / SAS Port / SAS Link / SAS PHY
• iSCSI: iSCSI / TCP / IP / Ethernet MAC / Ethernet PHY
• “FC”: FC-4/FC-3 / FC-2 / FC-1 / FC-0
• FCoE: FCoE / Ethernet MAC / Ethernet PHY

Page 26: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

Comparing Protocols

                     iSCSI                  FCoE                    FCP
DCB Ethernet         Optional               Required                N/A
Routable             Yes                    No                      Optional
Hosts                Servers and clients    Servers only            Servers only
Initiators           Software and hardware  Software* and hardware  Hardware
Guaranteed delivery  Yes (TCP)              Optional                Optional
Flow control         Optional (rate-based)  Rate-based              Credit-based
Inception            2003                   2009                    1997
Fabric management    Ethernet tools         FC tools                FC tools

Page 27: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

Why Go iSCSI?
• iSCSI targets are robust and mature
▫ Just about every storage vendor offers iSCSI arrays
▫ Software targets abound, too (Nexenta, Microsoft, StarWind)
• Client-side iSCSI is strong as well
▫ Wide variety of iSCSI adapters/HBAs
▫ Software initiators for UNIX, Windows, VMware, Mac
• Smooth transition from 1- to 10-gigabit Ethernet
▫ Plug it in and it works, no extensions required
▫ iSCSI over DCB is rapidly appearing

Page 28: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

iSCSI Support Matrix

         Certified?  Initiator  Multi-Path      Clustering
Windows  Yes         HBA/SW     MPIO, MCS       Yes
Sun      Yes         HBA/SW     Trunking, MPIO  Yes
HP       Yes         SW         PV Links        ?
IBM      Yes         SW         Trunking        ?
RedHat   Yes         HBA/SW     Trunking, MPIO  Yes
SUSE     Yes         HBA/SW     Trunking, MPIO  Yes
ESX      Yes         SW         Trunking        Yes

Page 29: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

iSCSI Reality Check

For:
• Widely available
• Inexpensive
• Functionality
• Performance

Against:
• Existing FC estate
• 1 GbE legacy

Page 30: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

The Three-Fold Path of Fibre Channel

End-to-End Fibre Channel
• The traditional approach: no Ethernet
• Currently at 8 Gb
• Widespread, proven

FC Core and FCoE Edge
• Common “FCoE” approach
• Combines 10 GbE with 4- or 8-Gb FC
• Functional
• Leverages FC install base

End-to-End FCoE
• Extremely rare currently
• All 10 GbE
• Should gain traction
• Requires lots of new hardware

Page 31: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

FCoE Spotters’ Guide

[Diagram: Fibre Channel over Ethernet (FCoE) sits at the intersection of Fibre Channel and Ethernet, combining FC-BB-5 with the DCB standards: Bandwidth Management (ETS, 802.1Qaz), Priority Flow Control (PFC, 802.1Qbb), and Congestion Management (QCN, 802.1Qau)]

Page 32: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

Why Go FCoE?
• Large FC install base/investment
▫ Storage arrays and switches
▫ Management tools and skills
• Allows for incremental adoption
▫ FCoE as an edge protocol promises to reduce connectivity costs
▫ End-to-end FCoE would be implemented later
• I/O consolidation and virtualization capabilities
▫ Many DCB technologies map to the needs of server virtualization architectures
• Also leverages Ethernet infrastructure and skills

Page 33: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

Who’s Pushing FCoE and Why?
• Cisco wants to move to an all-Ethernet future
• Brocade sees it as a way to knock off Cisco in the Ethernet market
• QLogic, Emulex, and Broadcom see it as a differentiator to push silicon
• Intel wants to drive CPU upgrades
• NetApp thinks their unified storage will win as native FCoE targets
• EMC and HDS want to extend their dominance of high-end FC storage
• HP, IBM, and Oracle don’t care about FC anyway

Page 34: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

FCoE Reality Check

For:
• All the cool kids are doing it!
• Might be cheaper or faster than FC
• Leverages FC investment

Against:
• Unproven and expensive
• End-to-end FCoE is nonexistent
• 8 Gb FC is here
• Continued bickering over protocols

Page 35: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

Comparing Protocol Efficiency

Source: Ujjwal Rajbhandari, Dell Storage Product Marketing
http://www.delltechcenter.com/page/Comparing+Performance+Between+iSCSI,+FCoE,+and+FC
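Raw per-frame efficiency is easy to estimate from header sizes alone; the Dell results above also reflect offload and CPU behavior, which this sketch ignores. The FCoE encapsulation overhead used below is approximate, and all figures are illustrative.

```python
# Rough per-frame payload efficiency for the three block protocols,
# counting only header/framing bytes on the wire.

def efficiency(payload, overhead):
    return payload / (payload + overhead)

# iSCSI on standard Ethernet: 1460-byte TCP payload per frame, with
# 18 (Ethernet + FCS) + 20 (IP) + 20 (TCP) bytes around it.
iscsi_std   = efficiency(1460, 18 + 20 + 20)
# iSCSI with 9000-byte jumbo frames: 8960-byte TCP payload.
iscsi_jumbo = efficiency(8960, 18 + 20 + 20)
# Native FC: 2112-byte payload, 36 bytes of SOF/header/CRC/EOF.
fc          = efficiency(2112, 36)
# FCoE: the FC frame plus Ethernet and FCoE encapsulation (~50 bytes).
fcoe        = efficiency(2112, 36 + 50)

for name, e in [("iSCSI 1500", iscsi_std), ("iSCSI jumbo", iscsi_jumbo),
                ("FC", fc), ("FCoE", fcoe)]:
    print(f"{name:12s} {e:.1%}")
```

All four land in the 96-99% range, which is why measured differences between the protocols come mostly from CPU overhead and offload support rather than framing efficiency.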

Page 36: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

Comparing Protocol Throughput

Source: Ujjwal Rajbhandari, Dell Storage Product Marketing
http://www.delltechcenter.com/page/Comparing+Performance+Between+iSCSI,+FCoE,+and+FC

Page 37: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

Comparing Protocol CPU Efficiency

Source: Ujjwal Rajbhandari, Dell Storage Product Marketing
http://www.delltechcenter.com/page/Comparing+Performance+Between+iSCSI,+FCoE,+and+FC

Page 38: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

Conclusion

iSCSI

• Will continue to grow

• Easy, cheap, widely supported

• Grows to 10 Gb seamlessly

FCoE

• Likely but not guaranteed

• Relentlessly promoted by major vendors

• Many areas still not ready for prime time

Page 39: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

Counterpoint: Why Ethernet?
• Why converge on Ethernet at all?
▫ Lots of work just to make Ethernet perform unnatural acts!
• Why not InfiniBand?
▫ Converged I/O already works
▫ Excellent performance and scalability
▫ Wide hardware availability and support
▫ Kinda pricey; another new network
• Why not something else entirely?
▫ Token Ring would have been great!

Page 40: "FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

Thank You!

Stephen Foskett
[email protected]
@SFoskett
+1 (508) 451-9532

FoskettServices.com
blog.fosketts.net
GestaltIT.com
TechFieldDay.com
