Join us to learn more about the Emulex Connect Architecture: the Next Generation of Virtual I/O Connectivity, and the new XE201 I/O Controller, the industry's first quad-port converged fabric controller unifying Fibre Channel and Ethernet.
May 02, 2011
Emulex Connect Architecture: The Next Generation of Virtual I/O
Welcome
3-Year Road Map of I/O Technology
Emulex Connect Architecture
Introduction of Converged Fabrics Model
Outline New I/O Market Opportunities
Introduction of the XE201 I/O Controller
Webcast Agenda

0:00 – 0:05  Welcome & Agenda (Shaun Walsh, Emulex)
0:05 – 0:15  I/O Challenges in Virtual Data Center Environments (Bob Laliberte, ESG)
0:15 – 0:30  Emulex Connect Architecture (Jeff Benck, Emulex)
0:30 – 0:40  The XE201 – Next Gen I/O Engine (Shaun Walsh, Emulex)
0:40 – 0:50  Connectivity Options for Tomorrow's Storage Devices (Deirdre Wassell, EMC)
0:50 – 1:00  Summary and Q&A (All)
©2011 Enterprise Strategy Group
Enterprise Strategy Group | Getting to the bigger truth.™
I/O Challenges in Virtual Data Center Environments
Bob Laliberte, Sr. Analyst, Enterprise Strategy Group
Agenda
ESG research
Virtualization is driving new I/O requirements
Options for virtualized connectivity still vary
The evolution of network convergence
Technology landscape
More than a Speed Bump… A New I/O Direction
Top IT Initiatives
Which of the following would you consider to be your organization's most important IT priorities over the next 12-18 months? (Percent of respondents, N=611, ten responses accepted)

Increase use of server virtualization: 30%
Manage data growth: 24%
Information security initiatives: 24%
Major application deployments or upgrades: 23%
Improve data backup and recovery: 22%
Desktop virtualization: 21%
Data center consolidation: 21%
Business continuity/disaster recovery programs: 20%
Large-scale desktop/laptop PC refresh: 19%
Regulatory compliance initiatives: 18%
Rapid Growth Over Next 2 Years
Of all the potential x86 servers in your organization that can be virtualized, approximately what percentage of these systems have actually been virtualized to date? How do you expect this to change over the next 24 months? (Percent of respondents, N=463)

[Bar chart comparing today vs. 24 months from now across response categories: less than 10% of servers, 10% to 20%, 21% to 30%, 31% to 40%, 41% to 50%, more than 50%, don't know]
What would you estimate is the average number of virtual machines per physical x86 server in your environment today? How do you expect this to change over the next 24 months? (Percent of respondents, N=463)

VMs per server    Today    24 months from now
<5                26%      8%
5-10              36%      24%
11-25             23%      30%
>25               12%      31%
Don't know        3%       7%
VM Density Drives More I/O Throughput

#1 Impact: server virtualization's impact on the network – "It has created more network traffic in the data center" (30% of respondents). Rising VM density drives the step from 10GbE and 16Gb FC toward 40Gb.
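The density numbers above translate directly into bandwidth pressure. A quick arithmetic sketch of this (the per-VM bandwidth figure is an assumption for illustration, not a number from the deck):

```python
import math

def server_io_demand_gbps(vms_per_server: int, gbps_per_vm: float) -> float:
    """Aggregate network I/O one virtualized server generates."""
    return vms_per_server * gbps_per_vm

def links_needed(demand_gbps: float, link_gbps: float) -> int:
    """Links of a given speed required to carry that demand."""
    return math.ceil(demand_gbps / link_gbps)

# Assumed workload: 25 VMs per host, 0.5 Gb/s each -> 12.5 Gb/s aggregate.
demand = server_io_demand_gbps(vms_per_server=25, gbps_per_vm=0.5)
print(links_needed(demand, 1.0))   # 1GbE links needed: 13
print(links_needed(demand, 10.0))  # 10GbE links needed: 2
```

At that assumed density, a host that was comfortable on a pair of 1GbE ports now saturates them many times over, which is the consolidation argument for 10GbE server connectivity.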
Storage Technologies Currently Used
What storage technologies are you currently using to support your organization's virtual server environment? Which would you consider to be the primary storage technology used to support your virtual server environment? (Percent of respondents, N=190)

                                            All technologies    Primary technology
Fibre Channel storage area network (SAN)    67%                 43%
Network-attached storage (NAS)              63%                 26%
Direct-attached storage (DAS)               57%                 17%
iSCSI storage area network (SAN)            39%                 9%
Don't know                                  3%                  4%

#2 Enabler
ESG's Evolution of Network Convergence

Current Level – Dedicated Networks
• Organizations keep LAN, SAN and IP storage on their own separate networks
• Separate management tools and teams
• Unique skills and training required

Progressing Level – Consolidated Networks
• Starts with Ethernet – run LAN and IP storage together
• Maintain existing investment in FC
• Consolidate connectivity at the server and storage level
• "Convergence Ready"

Advanced Level – Fully Converged Networks
• Merge onto a single fully converged network
• Run LAN and all storage over Ethernet – FCoE
• Converged adapters, cabling and switches
Technology Landscape
Processor technology is rapidly advancing
Nehalem, Romley later this year
Rapid growth in dynamic environments
Private and public clouds
Network needs to evolve to meet demands
I/O needs to adapt
Convergence
PCIe 3, SR-IOV & MR-IOV
Visibility and management
Thank You

For more information, please contact:
Bob Laliberte
508.381.5169 | [email protected]
Jeff Benck, President and COO
Emulex Connect Architecture
The Data Center of the Future

Discrete Data Center
• 3 Discrete Networks
• Equipment Proliferation
• Management Complexities
• Expanding OPEX & CAPEX

Virtual Data Center
• Converged Networks
• Virtualized
• Simplifies I/O Management
• Reduces CAPEX & OPEX

Cloud Data Center
• Cloud Computing (Private & Public)
• On-Demand Provisioning and Scale
• Modular Building Blocks ("Legos")
• Avoid CAPEX & OPEX

10G Ethernet and FCoE are enabling technologies to support the Virtual Data Center and Cloud Computing.
New Drivers of the Emulex Connect Architecture
Storage Universe
• 7ZB by 2013
• Mobile and VDI
• Device-Centric

Virtual Networking
• VM I/O Density
• Scalable vDevices
• End-to-End vI/O

Cloud Connectivity
• New I/O Models
• I/O Isolation
• New Server Models

Network Convergence
• Multi-Fabric I/O
• Evolutionary Steps
• RoCEE Low Latency
Evolving Network Models

Host Connect and Target Connect evolve through three stages: Discrete Networks, Converged Fabric Networks, and Converged Networks.
Emulex Connect Architecture (ECA)
Virtual Network Services and Management
Connectivity for Cloud & Virtual Data Centers
Scalable Performance and Virtual Devices
Emulex Enterprise Class Reliability
Flexible Multi-Fabric Protocol Engines
Emulex Connect I/O Roadmap
[Roadmap chart, 2010–2013, spanning Ethernet, High Performance Computing and Unified Storage]
• Multi-Fabric Technology with low-latency RoCEE RDMA
• Value-added I/O services and I/O management
• Networked server/power management: 3rd-gen BMC, then 4th-gen BMC
• Converged networking: Universal LOMs, 10Gb and 10GBase-T, then 40Gb and 100Gb
• Fibre Channel: 8Gb today, then 16Gb, then 32Gb
• PCIe Gen3; SR-IOV, Multichannel
Contents Under Embargo until 9AM PST, May 2, 2011
Emulex Connect Architecture
[Architecture diagram]
• I/O Connectivity: ASIC, connectivity bridge cards, modular LOM, adapter, blade/mezz, Target Connect
• Driver Services: Emulex APIs, SLI-4 common drivers
• I/O Services: Ethernet, FC/FCoE, iSCSI bridging, IP KVM
• Virtual I/O: SR-IOV, Universal Multi-Channel, OEM virtual networks, HiperSwitch virtual devices, encryption, data integrity, key managers, I/O isolation, I/O contention
• Management: open standards, hypervisor frameworks (vCenter), OneCommand Manager, OS frameworks, OEM frameworks, Cloud OS
• Deployment targets: infrastructure, network appliance, host/rack connect, network switching, cloud container, storage systems
The Flexibility of the Converged Fabric Adapter

One device, multiple port configurations:
• Dual Port 8Gb FC (2x8)
• Quad Port 8Gb FC (4x8)
• Dual Port 16Gb FC (2x16)
• Dual Port 10Gb CNA (2x10)
• Quad Port 10Gb CNA (4x10)
• Dual 8Gb FC + Dual 10Gb CNA (2x8 + 2x10)
• Single 16Gb FC + Dual 10Gb CNA (1x16 + 2x10)
• Single Port 40Gb CNA (1x40)
ECA – Proof Points
• First 16Gb Fibre Channel HBA demonstration – Interop, May 9, 2011
• First 10GBase-T UCNA demonstration – EMC World, May 9, 2011
• First 40GbE UCNA demonstration – Interop, May 9, 2011
The Evolution of Virtual & Cloud I/O
Discrete, Multi-Fabric and Converged
Cloud Scale Performance
Emulex Connect Architecture Protects and Connects
New Generation of Virtual I/O Services
Emulex Enterprise Class Reliability
Multiple Steps to Convergence End Game
Shaun Walsh, VP, Corporate Marketing

Emulex XE201: The Next Generation I/O Engine
The XE201 – Multi-Fabric I/O Engine
• High-Performance Multi-Fabric Combo Networking
• Industry-Leading Data Integrity
• Virtual I/O Engine
• PCIe 3.0
• Advanced Energy Instrumentation

The only I/O engine with 8/16Gb FC, 10/40GbE and quad ports
Key XE201 Technologies
vPath™ – Virtual I/O Management
• HiperSwitch VEPA multi-VM pathing
• I/O isolation
• Cloud and co-location management
• RoCEE low-latency RDMA
Key XE201 Technologies
vScale™ – Workload Performance Manager
• Highest-bandwidth storage protocol: 2x FC I/O bandwidth, 2x Ethernet I/O bandwidth
• Workload-based performance and scalability
• Dynamic resource pooling
• 8 I/O cores for true multi-fabric
Key XE201 Technologies
vEngine™ – Protocol Offload Engine
• Reduces CPU overhead by up to 25%
• Enables 20% more VMs per CPU core
Key XE201 Technologies
Universal Multi-Channel
• Virtual I/O support for 8x the VM ratios
• 8x virtual functions
• SR-IOV, VN-Tag
• VEPA and VEB
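For context on how virtual functions are consumed: on Linux, SR-IOV VFs are typically enabled by writing a VF count to the device's standard `sriov_numvfs` sysfs attribute. A minimal sketch of that flow; the PCI address and VF limit below are hypothetical, and the function only composes and validates the write (actually performing it requires root and SR-IOV-capable hardware such as the adapters described here):

```python
def sriov_enable_cmd(pci_addr: str, num_vfs: int, total_vfs: int):
    """Validate a VF request against the device's advertised maximum
    (normally read from the sriov_totalvfs attribute) and return the
    sysfs path plus the value to write. Kept as a pure function so the
    policy is testable without hardware."""
    if not (0 <= num_vfs <= total_vfs):
        raise ValueError(f"requested {num_vfs} VFs, device supports {total_vfs}")
    path = f"/sys/bus/pci/devices/{pci_addr}/sriov_numvfs"
    return path, str(num_vfs)

# Hypothetical device advertising 8 VFs (matching the "8x virtual functions" claim)
path, value = sriov_enable_cmd("0000:03:00.0", 8, 8)
print(path)   # /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs
print(value)  # 8
```

Each enabled VF then appears to the OS or hypervisor as its own PCI function that can be passed through directly to a VM, which is what makes the higher VM-to-adapter ratios possible.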
Key XE201 Technologies
Compliance
• BlockGuard™ – T10-PI standard end-to-end data integrity eliminates silent data corruption
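T10-PI, the standard behind BlockGuard, appends an 8-byte protection field to each 512-byte block; its 16-bit guard tag is a CRC over the block data computed with the T10-DIF polynomial 0x8BB7. A bit-by-bit software sketch of that guard-tag CRC (adapters compute it in hardware at line rate; this is only to show what is being checked end to end):

```python
def t10dif_guard(data: bytes) -> int:
    """CRC-16/T10-DIF: polynomial 0x8BB7, initial value 0, no bit
    reflection, no final XOR. The resulting 16-bit guard tag travels
    with the block so any hop can detect silent corruption."""
    crc = 0x0000
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x8BB7) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

sector = bytes(512)                             # an all-zero 512-byte block
good = t10dif_guard(sector)
corrupt = t10dif_guard(b"\x01" + sector[1:])    # one flipped byte
print(good != corrupt)                          # True: the guard tag catches it
```

Because the tag is verified at each point that implements T10-PI (HBA, fabric target, drive), corruption introduced anywhere along the path is detected rather than silently written to disk.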
Key XE201 Technologies
GreenState™
• 4x bandwidth per watt
• Proactive power instrumentation and energy reduction
• Port power provisioning to save on optics power consumption
Key XE201 Technologies
OneCommand™ Framework
• 2x functionality in half the time
• Unified drivers & APIs
• OC Vision
• OC Guardian
• OC Key Manager
• VMware vCenter
• Pay-As-You-Go
© Copyright 2011 EMC Corporation. All rights reserved.
Connectivity Options for Tomorrow’s Storage Devices
Deirdre Wassell, Director, Solutions Marketing, EMC Corporation
Connectivity Options for Storage: Fibre Channel and Ethernet Options
• 10 Gigabit Ethernet: LAN, NAS & iSCSI
• Fibre Channel: SAN
Connectivity Options for Storage: Fibre Channel over Ethernet (FCoE)
• 10 Gigabit Ethernet: LAN, NAS & iSCSI
• Fibre Channel: SAN
• FCoE: common infrastructure and management; investment protection
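What makes FCoE a convergence play is that the FC frame is left intact and simply wrapped in an Ethernet frame carrying the FCoE Ethertype (0x8906; the FCoE Initialization Protocol uses 0x8914), so existing FC zoning and management carry over. A sketch of the outer framing only; the MAC addresses here are hypothetical placeholders:

```python
import struct

FCOE_ETHERTYPE = 0x8906  # FCoE data frames
FIP_ETHERTYPE = 0x8914   # FCoE Initialization Protocol (login/discovery)

def ethernet_header(dst_mac: bytes, src_mac: bytes, ethertype: int) -> bytes:
    """14-byte Ethernet II header: 6-byte destination MAC,
    6-byte source MAC, 2-byte Ethertype."""
    return struct.pack("!6s6sH", dst_mac, src_mac, ethertype)

hdr = ethernet_header(bytes.fromhex("0efc00000001"),   # hypothetical dst MAC
                      bytes.fromhex("020000000001"),   # hypothetical src MAC
                      FCOE_ETHERTYPE)
print(hdr[12:14].hex())  # 8906: how a converged switch steers storage traffic
```

The Ethertype is what lets a single converged link carry LAN and SAN traffic side by side, with the switch distinguishing them per frame.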
Cloud Computing and Big Data Top IT Initiatives

Connectivity matters: 16 Gigabit Fibre Channel and 10 Gigabit Ethernet

• Manage data growth: customers face massive increases in data; data sets keep growing (Big Data)
• Increase server virtualization: the first step to cloud computing; virtualization affects bandwidth requirements
16Gb/s Fibre Channel Market
Market Adoption Timeline
• Q2 2011: first 16Gb/s demos
• Q3 2011: industry plugfests
• Q4 2011: switches and host bus adapters
• 2012/2013: Romley servers and arrays
• 2014: peak 16Gb/s shipments

Drivers: server virtualization, increasing server workloads, application growth, 12-core processors, SSDs, PCIe 3.0
Applications: high-end backup/DR, high-end databases, fabric tiering, private clouds
Benefits: higher performance, reduced number of links, easier cable management, superior power efficiency
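The "reduced number of links" benefit is straightforward arithmetic: doubling per-link speed halves the ports, cables and optics needed for the same aggregate throughput. A worked sketch with an assumed bandwidth target (the 64 Gb/s figure is illustrative, not from the deck):

```python
import math

def links_for(aggregate_gbps: float, link_gbps: float) -> int:
    """Links needed to carry a target aggregate bandwidth at a given link speed."""
    return math.ceil(aggregate_gbps / link_gbps)

target = 64.0  # assumed aggregate Gb/s an array front end must sustain
print(links_for(target, 8.0))   # 8Gb FC links needed: 8
print(links_for(target, 16.0))  # 16Gb FC links needed: 4
```

Halving the link count is where the cable-management and power-efficiency benefits come from: fewer transceivers drawing power and fewer switch ports to buy and manage.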
Technology Roadmap (D = Defined, S = Standard, W = Widespread market availability)

Ethernet:       D 1973, S 1983, W 1993
Fibre Channel:  D 1985, S 1994, W 2003 (1Gb 1996, 2Gb 2001, 4Gb 2005, 8Gb 2008, 16Gb 2011)
iSCSI:          D 2000, S 2002, W 2008
10 GbE:         S 2002, W 2009
FCoE:           D 2007, S 2009, W ??
40/100 GbE:     D 2008, S 2010, W ??
16 Gb FC:       D 2009, S 2011, W ??
32 Gb FC:       ??
Market Adoption: 10 Gigabit Ethernet, Fibre Channel and FCoE

[Adoption curve: Early Adopters, Chasm, Early Majority, Late Majority, Laggards]
Today's Storage Networking Technologies

1. Direct-attached storage: servers with unused storage and uncontrolled growth; storage dedicated to one server; decentralized backup
2. Fibre Channel SANs: eliminate islands of storage; increase utilization and availability; highest performance levels
3. FC-IP/iFCP: connects geographically dispersed SANs; low cost and easy to deploy for disaster recovery solutions
4. iSCSI/NAS: consolidates small or isolated servers; offers low-cost server attachment; NAS is ideal for files and unstructured data
5. InfiniBand: low latency, high bandwidth; ideal for high-performance computing (HPC)
6. Fibre Channel over Ethernet: converges LAN and SAN traffic on a single link; lowers operational costs (cabling and converged network adapters); scalability for virtual environments
Summary and Q & A
What is the News Today?

• The Emulex Connect Architecture defines the next generation of I/O engines, solutions, services and management for cloud and virtual data centers
• Emulex is the first company to sample Converged Fabric Adapter (CFA) technology to OEMs, combining FC and network convergence on a single platform
• Emulex is the only company with converged multi-fabric technology capable of 8/16Gb FC and 10/40GbE, with FC and Ethernet running concurrently on the same card
• Emulex is providing an evolutionary, pay-as-you-go model for discrete, converged multi-fabric and converged networks
• Emulex will publicly demonstrate the first 16Gb FC HBA, 10GBase-T UCNA and 40GbE UCNA at EMC World and Interop on May 9th
Q & A