Jake Howering
Director, Product Management
Solution and Technical Leadership Keys
The Market, Position and Message
Extreme Integration and Technology
Market Opportunity for Converged Infrastructure
The Converged Infrastructure market is predicted to grow from (US) $6 billion in 2013 to (US) $74 billion in 2017*, a 52% CAGR that includes networking, storage, and compute.
*Source: Wikibon
Data Center connectivity is changing
• Increasing emphasis on Ethernet-based connectivity options

2010-2014 CAGR by storage connectivity type:

  Fibre Channel over Ethernet   104.6%
  Switched SAS                   31.9%
  iSCSI SAN                      18.2%
  NAS + iSCSI + FCoE             13.9%
  Network-attached NAS            5.4%
  Fibre Channel SAN               1.3%
  External DAS                   -8.8%

Source: IDC (7/10) and EMC
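To make these growth rates concrete, here is a minimal Python sketch (illustrative only, not part of the deck) showing how a CAGR compounds over the four-year window above:

```python
# Illustrative sketch: how a compound annual growth rate (CAGR)
# translates into total growth over the 2010-2014 window.

def growth_over_period(cagr: float, years: int) -> float:
    """Total growth multiple implied by a compound annual growth rate."""
    return (1.0 + cagr) ** years

# FCoE at 104.6% CAGR roughly doubles every year:
print(growth_over_period(1.046, 4))   # ~17.5x over 2010-2014
# External DAS at -8.8% CAGR shrinks:
print(growth_over_period(-0.088, 4))  # ~0.69x, i.e. ~31% decline
```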
[Chart: Storage Networking Interconnections – Fibre Channel vs Ethernet; revenue ($M) by year, 2008-2014]
[Chart: FC vs Ethernet Storage Port Count Growth, 2012-2016; port counts by speed class: 2/4G, 8G, 16G, 10GE, 40GE]
VSPEX Certification & Best-of-Breed Solutions
Test configuration:
• EMC VNX 5300
• Extreme Networks Summit X670
• Lenovo RD630 + QLogic 8300 CNA
• Mixed workloads on VMware ESXi 5.1

Designed for flexibility and validated to ensure interoperability and fast deployment, VSPEX enables you to choose the technology in your complete cloud infrastructure solution.
http://www.emc.com/platform/vspex-proven-infrastructure

Validated: Ethernet SAN, up to 125 VMs, failure scenarios, 9.88 Gb/s iSCSI throughput
Extreme Competitive – Beating the Competition
                 Extreme Networks X670   Cisco Nexus 5548UP   Brocade VDX 6730
Switch Height    1 RU                    1 RU                 1 RU
OS               Single OS               Multiple OSs         Single OS
Max 10GE ports   64                      48                   60
Max 40GE ports   4                       0                    0
Throughput       1.2T                    960G                 1.2T
Stacking         Yes                     No                   Yes
OpenFlow         Yes                     No                   Yes
OpenStack        Yes                     Yes                  No
List Price       ~$25,000                ~$55,000             ~$62,000
Technology       iSCSI                   FCoE                 FCoE

High Performance and High Value
Extreme Innovation with Open Standards
Extreme Validated Solution (EVS) Enables Storage Partners and SDN
Storage partners: NetApp, EMC, others
Validated features: VLANs, LAG, iSCSI (over TCP), Jumbo Frames (9000 bytes), DCB
Extreme SDN for Converged Infrastructure – Available Now!
OpenStack: the Extreme Quantum plug-in is 'topology aware'

[Diagram: Data Center Core connected to the Internet; Zone 1 contains Pod 1 and Pod 2, each with its own Network, Compute, and Storage layers]

• VMs are provisioned in Pod 1 based on the topology scheduler's proximity algorithm (see the sketch below)
• VM mobility, aka vMotion, can be restricted to Pods or Zones
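A minimal sketch of how a proximity-based placement filter might work. The pod/zone metadata, `Host` structure, and function names below are illustrative assumptions, not the actual plug-in API:

```python
# Hypothetical sketch of a proximity-aware placement filter.
# Pod/zone names and the Host structure are illustrative; the real
# Extreme Quantum plug-in interface is not shown in this deck.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    pod: str       # e.g. "pod1"
    zone: str      # e.g. "zone1"
    free_vcpus: int

def filter_hosts(hosts, storage_pod, required_vcpus=1):
    """Prefer hosts in the same pod as the VM's storage, then fall back."""
    candidates = [h for h in hosts if h.free_vcpus >= required_vcpus]
    same_pod = [h for h in candidates if h.pod == storage_pod]
    return same_pod or candidates  # zone-wide candidates if pod is full

hosts = [Host("h1", "pod1", "zone1", 4), Host("h2", "pod2", "zone1", 8)]
print(filter_hosts(hosts, storage_pod="pod1"))  # places the VM in Pod 1
```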
What Does it Mean to be Extreme?

Value
– Leverage the low cost curve of Ethernet
– Converge LAN and SAN onto a single network for lower CAPEX and OPEX
– Efficient scalability with a pay-as-you-grow model for incremental growth

Performance
– High availability to maximize uptime and user experience
– Efficient bandwidth utilization while assuring a loop-free topology
– Data Center Bridging features for a lossless SAN experience

Simplicity
– Pre-tested and pre-validated solution assures seamless deployment and operations
– Extreme Networks' single OS provides a consistent and predictable UI and troubleshooting
– Automation and management tools with VMware vCenter, EMC Unisphere, and Extreme Networks Ridgeline

Open Standards
– Industry-standard protocols, including Ethernet, to assure interoperability
– SDN-ready with OpenStack and OpenFlow support
– Open APIs, including XML and SOAP, for system abstraction and custom integration as needed
Solution and Technical Leadership Keys
The Market, Position and Message
Extreme Integration and Technology
Storage Networking – Multiple Protocols

Choice of connectivity
– Fibre Channel (4 Gb/s, 8 Gb/s)
– Low-cost IP (1 Gb/s, 10 Gb/s)
– FCoE

Choice of delivery
– File-based
– Block-based

Growth paths
– iSCSI to Fibre Channel for throughput
– Fibre Channel to FCoE for simplification
– Scale front end and storage independently

[Diagram: a single storage system serving Ethernet file sharing (NAS), an Ethernet iSCSI SAN, a Fibre Channel SAN, and an FCoE SAN – "SIMPLE"]
Typical Storage Systems Deployment

• Simple – Tune SQL Server in 80% less time with FAST VP; provision SharePoint 4 times faster with a single tool
• Efficient – Realize 50:1 server consolidation without creating storage bottlenecks with FAST Cache
• Powerful – Run virtualized Microsoft SQL Server and Oracle three times faster

[Diagram: a virtual server pool (Oracle, Microsoft Exchange, SQL Server, SharePoint) managed through Unisphere and vCenter, sharing a VNX series storage pool for virtual servers and applications]
Storage Networking Key Requirements
• Availability
• Resiliency
• Isolation
• Performance
Storage Networking Key Technologies
• Fibre Channel
• Fibre Channel over Ethernet
• iSCSI
Network Stack Comparison (each stack runs over the physical wire, top to bottom):

  iSCSI:  SCSI / iSCSI / TCP / IP / Ethernet
  FCIP:   SCSI / FCP / FCIP / TCP / IP / Ethernet
  FCoE:   SCSI / FCP / FCoE / Ethernet
  FC:     SCSI / FCP / FC

FCoE has less overhead than FCIP or iSCSI.
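As a rough illustration of the overhead point, a small Python sketch comparing per-frame encapsulation bytes. The header sizes are approximations for illustration; treat the exact figures as assumptions:

```python
# Rough per-frame encapsulation overhead (bytes) for carrying SCSI data.
# Approximate header sizes: Ethernet includes the 4-byte FCS; FC counts
# its 24-byte header plus CRC; the iSCSI Basic Header Segment is 48 bytes.
overhead = {
    "FC":    24 + 4,                # FC header + CRC
    "FCoE":  18 + 14 + 4 + 24 + 4,  # Ethernet + FCoE encap + EOF + FC header + CRC
    "iSCSI": 18 + 20 + 20 + 48,     # Ethernet + IP + TCP + iSCSI BHS
}
for proto, n in sorted(overhead.items(), key=lambda kv: kv[1]):
    print(f"{proto}: ~{n} bytes of headers per frame")
```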
Ethernet-Based Storage Systems

Data Center Bridging (DCB) protocols:
• Priority-based Flow Control (PFC)
• Enhanced Transmission Selection (ETS)
• DCB Capabilities Exchange (DCBX)

Block-based storage: iSCSI, FCoE
File-based storage: NFS, CIFS

ExtremeXOS infrastructure layer: Dynamic Scripting, CLEAR-Flow
Storage Networking Key Features
• DCB • FIP Snooping • STP • MLAG • VLANs • Jumbo Frames
Data Center Bridging – Key Technology for Lossless Ethernet SAN

DCBX: Data Center Bridging Capabilities Exchange (802.1Qaz)
Discovers and exchanges capabilities and configuration between DCB switches via LLDP (802.1AB), including:
• Priority Flow Control (802.1Qbb) – Pause specific classes of traffic between DCB switches
• Enhanced Transmission Selection (802.1Qaz) – Guarantee a specific percentage of bandwidth for a specific class of traffic (illustrated below)
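To show what an ETS guarantee means in practice, here is a minimal Python sketch. The traffic classes and the percentage split are hypothetical examples, not values from the deck:

```python
# Illustrative ETS arithmetic: per-class bandwidth guarantees on a 10GE
# link. The traffic classes and percentage split below are hypothetical.
LINK_GBPS = 10.0

ets_shares = {  # percentage of link bandwidth guaranteed per class
    "storage (FCoE/iSCSI)": 50,
    "LAN": 40,
    "management": 10,
}
assert sum(ets_shares.values()) == 100

for cls, pct in ets_shares.items():
    print(f"{cls}: guaranteed {LINK_GBPS * pct / 100:.1f} Gb/s "
          f"(unused share may be borrowed by other classes)")
```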
FCoE Initialization Protocol – FIP Snooping

FIP Snooping (FCoE Initialization Protocol Snooping) enables efficient FC transport (FCoE) over 10GE Ethernet in the data center.

FIP snooping is used in multi-hop FCoE environments. It is a frame-inspection method that FIP-snooping-capable DCB devices can use to monitor FIP frames and apply policies based on the information in those frames.
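A minimal sketch of the frame-inspection idea. The EtherType values are the real FIP/FCoE assignments; the frame objects and policy logic are illustrative simplifications:

```python
# Simplified sketch of FIP snooping: watch FIP control frames, then
# permit FCoE data traffic only between endpoints that logged in.
ETHERTYPE_FIP = 0x8914   # FCoE Initialization Protocol
ETHERTYPE_FCOE = 0x8906  # FCoE data frames

permitted_pairs = set()  # (enode_mac, fcf_mac) pairs seen logging in

def handle_frame(ethertype, src_mac, dst_mac, is_flogi_accept=False):
    if ethertype == ETHERTYPE_FIP:
        # Learn sessions from FIP login exchanges (simplified).
        if is_flogi_accept:
            permitted_pairs.add((dst_mac, src_mac))  # ENode <-> FCF
        return "inspect"
    if ethertype == ETHERTYPE_FCOE:
        # Forward FCoE only between endpoints with an observed login.
        ok = (src_mac, dst_mac) in permitted_pairs or \
             (dst_mac, src_mac) in permitted_pairs
        return "forward" if ok else "drop"
    return "forward"  # non-FCoE traffic is unaffected

# An FCF accepts a login from an ENode, opening that pair:
handle_frame(ETHERTYPE_FIP, "fcf_mac", "enode_mac", is_flogi_accept=True)
print(handle_frame(ETHERTYPE_FCOE, "enode_mac", "fcf_mac"))  # forward
print(handle_frame(ETHERTYPE_FCOE, "rogue_mac", "fcf_mac"))  # drop
```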
LAN & SAN – Physically Separate Topologies

• Servers connect to the LAN, NAS, and iSCSI SAN with NICs
• Servers connect to the FC SAN with HBAs
• Many environments today are still 1 Gigabit Ethernet
• Multiple server adapters, multiple cables, power and cooling costs – storage is a separate network (including iSCSI)

[Diagram: rack-mounted servers with 1 Gigabit Ethernet NICs to the Ethernet LAN and 1 GE iSCSI SAN, and Fibre Channel HBAs to the Fibre Channel SAN storage]
Adapter Evolution: Consolidation onto the Converged Network Adapter

[Diagram: hypervisor with vNICs and vSCSI devices over the vSwitch, VMkernel storage stack, and storage drivers and virtualization; separate NICs carry LAN and iSCSI traffic and FC HBAs carry FC traffic, versus CNAs carrying LAN, iSCSI, and FCoE traffic, with FCoE following the FC path]

*iSCSI initiator can also be in the VM
FCoE Extends FC on a Single Network

Two options over lossless Ethernet links:
1. Converged Network Adapter (CNA) with separate network and FC drivers
2. FCoE software stack on a standard 10G NIC

The server sees storage traffic as FC; the SAN sees the host as FC.

[Diagram: servers connect through a converged network switch to both the Ethernet network and the FC network with FC storage]
FCoE With External FCoE Gateway

• Converged network switches move out of the rack, from a tightly controlled environment into a unified network
• Maintains existing LAN and SAN management

[Diagram: rack-mounted servers with 10 GbE CNAs connect to an Ethernet network (IP, FCoE) and converged network switch, which attaches to the Ethernet LAN and, via FC, to the Fibre Channel SAN storage]
FCoE with Top of Rack Gateway

• Network switches stay in the rack for an IP-based unified network
• Needs a specialized network switch with both FC and Ethernet ports – expensive!
• Maintains existing LAN and SAN management

[Diagram: rack-mounted servers with 10 GbE CNAs connect to the top-of-rack switch, which attaches to the Ethernet LAN and, via FC, to the FC SAN storage]
Ethernet LAN & iSCSI SAN

• Network switches stay in the rack for an IP-based unified network
• Maintains existing LAN and SAN management

[Diagram: rack-mounted servers with 10 GbE CNAs connect to the Ethernet switch, which attaches to the Ethernet LAN and the iSCSI SAN storage]
Convergence at 10 Gigabit Ethernet

• Two paths to a converged network:
  – iSCSI: purely Ethernet
  – FCoE: allows a mix of FC and Ethernet (or all Ethernet)
• FC you have today or buy tomorrow will plug into this in the future
• Choose based on scalability, management, and skill set

[Diagram: rack-mounted servers with 10 GbE CNAs connect through a converged network switch to the Ethernet LAN, iSCSI/FCoE storage, and the FC SAN]
Software Defined Storage Networking – FCoE Overlay
Basic Topology – Customer to Compute Layer

[Diagram: a customer or machine connects over dual Ethernet fabrics A and B to the compute layer; marked links indicate traffic allowed to cross planes in normal working condition]

1. Dual paths from customer to compute layer – basic design
2. Active path with backup path – basic design
3. Load-shared paths – advanced design
Basic Topology – Compute to Storage

[Diagram: the compute layer connects over dual Ethernet fabrics A and B to the storage layer; in the failure plan, the path to Fabric A is active and the path to Fabric B is passive]

1. Dual paths from compute layer to storage layer – basic design
2. Active path with backup path – basic design, based on hypervisor multi-pathing (sketched below)
3. Load-shared paths – advanced design; requires a hypervisor plugin to enable I/O-level load sharing
4. Consider TCP monitoring of LACP LAG groups with iSCSI
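A minimal sketch of the active/passive failover idea from point 2, assuming a simplified path-health model. The names and checks are illustrative, not a hypervisor API:

```python
# Simplified active/passive multipath selection, as in the basic design:
# use the path through Fabric A while it is healthy, fail over to Fabric B.
from dataclasses import dataclass

@dataclass
class Path:
    fabric: str
    healthy: bool

def select_path(paths):
    """Return the preferred healthy path: active first, then backups."""
    for path in paths:  # list order encodes preference
        if path.healthy:
            return path
    raise RuntimeError("no healthy path to storage")

paths = [Path("Fabric A (active)", True), Path("Fabric B (passive)", True)]
print(select_path(paths).fabric)  # Fabric A while healthy
paths[0].healthy = False          # simulate a Fabric A failure
print(select_path(paths).fabric)  # fails over to Fabric B
```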
FC + FCoE Design – Single Hop & Active/Standby
FC + FCoE Design – Single Hop & Active/Active
FC + FCoE Design – Scalable Single Hop & Active/Active
Storage Networking Comparison

Capability     FC                FCoE                         iSCSI
Lossless       Yes               DCB (required)               DCB (optional)
Layer 2        N/A               Yes                          No
Layer 3 – IP   No                No                           Yes
TCP            No                No                           Yes
Resiliency     Yes               Yes                          Yes
Isolation      Yes               Yes                          Yes
Performance    Best              Second                       Third
Bandwidth      16G FC            10GE                         40GE+
Hardware       FC SAN Director   FCF (Gateway) + FIP Switch   Ethernet Switch
Thank You!
Extreme Converged Infrastructure
• http://www.extremenetworks.com/solutions/datacenter_converged_infrastructure.aspx
Network Design Guide Coming Out Soon!
Thank You
EMC VSPEX Minimum Requirements – 125 VMs

Profile characteristic                                        Value
Number of virtual machines                                    125
Virtual machine OS                                            Windows Server 2012 Datacenter Edition
Processors per virtual machine                                1
Number of virtual processors per physical CPU core            4
RAM per virtual machine                                       2 GB
Average storage available for each virtual machine            100 GB
Average IOPS per virtual machine                              25 IOPS
Number of LUNs or NFS shares to store virtual machine disks   1 or 2
Number of virtual machines per LUN or NFS share               50
Disk and RAID type for LUNs or NFS shares                     RAID 5, 600 GB, 15k rpm, 3.5-inch SAS disks
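A small Python sketch showing how these per-VM figures roll up into the cluster totals that appear in the requirement tables below (straightforward arithmetic on the numbers above):

```python
# Roll up the per-VM profile into cluster totals for the 125-VM solution.
import math

VMS = 125
VCPUS_PER_VM = 1
VCPUS_PER_CORE = 4        # consolidation ratio
RAM_GB_PER_VM = 2
STORAGE_GB_PER_VM = 100
IOPS_PER_VM = 25

total_vcpus = VMS * VCPUS_PER_VM                     # 125 vCPUs
min_cores = math.ceil(total_vcpus / VCPUS_PER_CORE)  # 32 physical cores
min_ram_gb = VMS * RAM_GB_PER_VM                     # 250 GB (+2 GB/host)
total_storage_gb = VMS * STORAGE_GB_PER_VM           # 12,500 GB
total_iops = VMS * IOPS_PER_VM                       # 3,125 IOPS

print(min_cores, min_ram_gb, total_storage_gb, total_iops)
```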
EMC VSPEX Hardware Requirements – 125 VMs

Component: Lenovo RD630
• Intel E5-2680, dual socket, 8 cores per socket with hyper-threading: 32 logical cores per node
• 256 GB RAM per node
• 2 x QLogic 8362 CNAs per host, using Ethernet drivers
• RAID 1 boot disk for hypervisor: 2 x 300 GB SAS
• IPMI enabled in BIOS; the dedicated copper management port shares access with the IPMI IP address

Component: VMware vSphere Servers
• CPU: 1 vCPU per virtual machine, 4 vCPUs per physical core
  For 125 virtual machines: 125 vCPUs, minimum of 32 physical CPUs
• Memory: 2 GB RAM per virtual machine, 2 GB RAM reservation per VMware vSphere host
  For 125 virtual machines: minimum of 250 GB RAM; add 2 GB for each physical server
• Network (block storage systems): 2 x 10 GE NICs per server, 2 x QLogic 8362 CNAs
NOTE: Add at least one additional server to the infrastructure beyond the minimum requirements to implement VMware vSphere High Availability (HA) functionality and to meet the listed minimums.

Component: Extreme Networks Infrastructure
Minimum switching capacity for block storage:
• 2 x Extreme Networks X670
• 2 x 10 GE ports per VMware vSphere server
• 1 x 1 GE port per Control Station for management
• 2 ports per VMware vSphere server, for the storage network
• 2 ports per SP, for storage data

Component: Shared Infrastructure
In most cases, a customer environment already has infrastructure services such as Active Directory (AD), DNS, and other services configured. The setup of these services is beyond the scope of this document.
If implemented without existing infrastructure, the new minimum requirements are:
• 2 physical servers
• 16 GB RAM per server
• 4 processor cores per server
• 2 x 1 GE ports per server
NOTE: These services can be migrated into VSPEX post-deployment. However, they must exist before VSPEX can be deployed.

Component: EMC VNX Series Storage Array
Block, common:
• 1 x 1 GE interface per Control Station for management
• 1 x 1 GE interface per SP for management
• 2 front-end ports per SP
• System disks for VNX OE
For 125 virtual machines, EMC VNX 5300:
• 60 x 600 GB 15k rpm 3.5-inch SAS drives
• 4 x 200 GB Flash drives
• 2 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
• 1 x 200 GB Flash drive as a hot spare
EMC VSPEX Software Versions

• VNX OE for file: Release 7.0.100-2
• VNX OE for block: Release 31 (05.31.000.5.704)
• EMC VSI for VMware vSphere: Version 5.1
• Virtual machine base operating system: Microsoft Windows Server 2008 R2
• VDBench: 5.0.2
  NOTE: VDBench was used to validate this solution. It is not a required component for production.
Extreme + EMC VSPEX Software Versions

VMware vSphere:
• vSphere Server: 5.1 Enterprise Edition
• vCenter Server: 5.1 Standard Edition
• Operating system for vCenter Server: Windows Server 2008 R2 SP1 Standard Edition
  NOTE: Any operating system that is supported for vCenter can be used.
• Microsoft SQL Server: Version 2008 R2 Standard Edition
  NOTE: Any supported database for vCenter can be used.

EMC VNX:
• VNX OE for Block: 05.32.000.3.770
• EMC VSI for VMware vSphere: Unified Storage Management: 5.4
• EMC VSI for VMware vSphere: Storage Viewer: 5.4
• EMC PowerPath/VE: 5.8

Virtual machines (used for validation – not required for deployment):
• Base operating system: Microsoft Windows Server 2012 Datacenter Edition

Network switching:
• Extreme Networks Summit switches: 15.3
EMC VSPEX Virtualization Requirements – 125 VMs

Component: VMware vSphere Servers
• CPU: 1 vCPU per virtual machine, 4 vCPUs per physical core
  For 125 virtual machines: 125 vCPUs, minimum of 32 physical CPUs
• Memory: 2 GB RAM per virtual machine, 2 GB RAM reservation per VMware vSphere host
  For 125 virtual machines: minimum of 250 GB RAM; add 2 GB RAM for each physical server
• Network (block): 2 x 10 GE NICs per server, 2 HBAs or CNAs per server
NOTE: Add at least one additional server to the infrastructure beyond the minimum requirements to implement VMware vSphere High Availability (HA) functionality and to meet the listed minimums.
EMC VSPEX Network Requirements – 125 VMs

Component: Network Infrastructure
Minimum switching capacity (block):
• 2 x Extreme Networks X670 physical switches
• 2 x 10 GE ports per VMware vSphere server
• 1 x 1 GE port per Control Station for management
• 2 ports per SP, for storage data
EMC VSPEX – Block Storage Requirements – 125 VMs

Component: EMC VNX Series Storage Array
Block, common:
• 1 x 1 GE interface per Control Station for management
• 1 x 1 GE interface per SP for management
• 2 front-end ports per SP
• System disks for OE
For 125 virtual machines, EMC VNX 5300:
• 60 x 600 GB 15k rpm 3.5-inch SAS drives
• 4 x 200 GB Flash drives
• 2 x 600 GB 15k rpm 3.5-inch SAS drives as hot spares
• 1 x 200 GB Flash drive as a hot spare