Windows Server 2012 NIC Teaming and Multichannel Solutions
Rick Claus, Sr. Technical Evangelist
@RicksterCDN | http://RegularITGuy.com
WSV321
Agenda - Reliability is job one!
NIC Teaming
- Overview
- Configuration choices
- Managing NIC Teaming
- Demo

SMB Multichannel
- Overview
- Sample configurations
- Troubleshooting
- Demo
What do NIC Teaming and SMB Multichannel have in common?
Reliability is job one
- NIC Teaming provides protection against failures in the host
- SMB Multichannel provides multi-path protection

More bandwidth is always a good thing
- NIC Teaming and SMB Multichannel both provide bandwidth aggregation when possible
NIC Teaming and SMB Multichannel work together!
NIC Teaming
What is NIC Teaming?
Also known as…
- NIC Bonding
- Load Balancing and Failover (LBFO)
- …other things
The combining of two or more network adapters so that the software above the team perceives them as a single adapter that incorporates failure protection and bandwidth aggregation.
. . . And?
NIC teaming solutions also provide per-VLAN interfaces for VLAN traffic segregation
Why use Microsoft’s NIC Teaming?
- Vendor agnostic – anyone's NICs can be added to the team
- Fully integrated with Windows Server 2012
- Lets you configure your teams to meet your needs
- Server Manager-style UI that manages multiple servers at a time
- Microsoft supported – no more calls to NIC vendors for teaming support, or getting told to turn off teaming
- Team management is easy!
NIC teaming dismantled (and a vocabulary lesson)
- Team members – or – network adapters
- Team
- Team interfaces, team NICs, or tNICs
Team connection modes
Switch independent mode
- Doesn't require any configuration of a switch
- Protects against adjacent switch failures
Switch dependent modes
- Generic or static teaming
- IEEE 802.1ax teaming (also known as LACP or 802.3ad)
- Requires configuration of the adjacent switch
[Diagrams: a switch dependent team and a switch independent team]
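The connection modes above map to the -TeamingMode parameter of New-NetLbfoTeam. A minimal sketch, assuming two physical adapters named "NIC1" and "NIC2" (the team and adapter names are illustrative):

```powershell
# Switch independent mode: no switch configuration required
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent

# Switch dependent, generic/static teaming: adjacent switch ports must be configured
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode Static

# Switch dependent, IEEE 802.1ax: adjacent switch must negotiate LACP
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode Lacp
```

Only one of these would be run per team; the three calls show the three connection modes.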
Load distribution modes
Address Hash – comes in 3 flavors:
- 4-tuple hash (the default distribution mode): uses the RSS hash if available; otherwise hashes the TCP/UDP ports and the IP addresses. If ports are not available, uses the 2-tuple hash instead.
- 2-tuple hash: hashes the IP addresses. If the traffic is not IP, uses the MAC address hash instead.
- MAC address hash: hashes the MAC addresses.
Hyper-V port: hashes the port number on the Hyper-V switch that the traffic is coming from. Normally this equates to per-VM traffic.
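These load distribution modes correspond to the -LoadBalancingAlgorithm parameter of the NetLbfo cmdlets. A sketch, with illustrative team and adapter names:

```powershell
# Address hash on TCP/UDP ports plus IP addresses (4-tuple)
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -LoadBalancingAlgorithm TransportPorts

# Alternative address-hash flavors: IPAddresses (2-tuple) or MacAddresses
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm IPAddresses

# Hyper-V port distribution, for a team bound to the Hyper-V switch
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm HyperVPort
```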
Switch/Load Interactions (Summary)
Switch Independent + Address Hash: sends on all active members; receives on one member (the primary member).

Switch Independent + Hyper-V port: sends on all active members; receives on all active members; traffic from the same port always stays on the same NIC.

Switch Dependent + Address Hash: sends on all active members; receives on all active members; inbound traffic may use a different NIC than outbound traffic for a given stream (inbound traffic is distributed by the switch).

Switch Dependent + Hyper-V port: all outbound traffic from a port goes on a single NIC; inbound traffic may be distributed differently depending on how the switch distributes traffic.
What modes am I using?
Switch/Load Interactions (SI/AH)
Sends on all active members using the selected level of address hashing (defaults to 4-tuple hash).
Because each IP address can only be associated with a single MAC address for routing purposes, this mode receives inbound traffic on only one member (the primary member).
Best used when: (a) native-mode teaming where switch diversity is a concern; (b) active/standby mode; or (c) servers running workloads that are heavy outbound and light inbound (e.g., IIS).
Switch/Load Interactions (SI/HP)
Sends on all active members using the hashed Hyper-V switch port. Each Hyper-V port will be bandwidth limited to not more than one team member's bandwidth.

Because each VM (Hyper-V port) is associated with a single NIC, this mode receives inbound traffic for the VM on the same NIC it sends on, so all NICs receive inbound traffic. This also allows maximum use of VMQs for better performance overall.

Best used for teaming under the Hyper-V switch when:
- the number of VMs well exceeds the number of team members, and
- restricting a VM to one NIC's bandwidth is acceptable
Switch/Load Interactions (SD/AH)
Sends on all active members using the selected level of address hashing (defaults to 4-tuple hash).
Receives on all ports. Inbound traffic is distributed by the switch. There is no association between inbound and outbound traffic.
Best used for:
- native teaming when maximum performance is needed and switch diversity is not required; or
- teaming under the Hyper-V switch when an individual VM needs to be able to transmit at rates in excess of what one team member can deliver
Switch/Load Interactions (SD/HP)
Sends on all active members using the hashed Hyper-V switch port. Each Hyper-V port will be bandwidth limited to not more than one team member’s bandwidth.
Receives on all ports. Inbound traffic is distributed by the switch. There is no association between inbound and outbound traffic.
Best used for Hyper-V teaming when:
- the number of VMs on the switch well exceeds the number of team members,
- policy calls for switch-dependent (e.g., LACP) teams, and
- an individual VM does not need to transmit faster than one team member's bandwidth
Team interfaces (tNICs)
Team interfaces can be in one of two modes:
- Default mode: passes all traffic that doesn't match any other team interface's VLAN ID
- VLAN mode: passes all traffic that matches the VLAN
Inbound traffic is always passed to at most one team interface
[Diagrams: a team with a VLAN 42 interface plus a default interface (all but 42); a team with only VLAN 42 and VLAN 99 interfaces, where non-matching traffic is black-holed; and a team with a single default interface feeding the Hyper-V switch]
Team interface – at team creation
When a team is created it has one team interface.
- Team interfaces can be renamed like any other network adapter (Rename-NetAdapter cmdlet)
- Team interfaces show up in Get-NetAdapter output
- Only this first (primary) team interface can be put in Default mode
Team Interfaces - additional
- Team interfaces created after initial team creation must be VLAN-mode team interfaces
- Team interfaces created after initial team creation can be deleted at any time (UI or PowerShell)
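The NetLbfoTeamNic cmdlets cover these operations. A sketch, assuming an existing team named "Team1" (the name is illustrative):

```powershell
# Add a VLAN-mode team interface for VLAN 42
Add-NetLbfoTeamNic -Team "Team1" -VlanID 42

# List all team interfaces on the team
Get-NetLbfoTeamNic -Team "Team1"

# Delete the VLAN 42 team interface
Remove-NetLbfoTeamNic -Team "Team1" -VlanID 42
```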
It is a violation of Hyper-V rules to have more than one team interface on a team that is bound to the Hyper-V switch
Teams of one
A team with only one member (one NIC) may be created for the purpose of disambiguating VLANs.
A team of one has no protection against failure (of course).
[Diagram: a one-member team with team interfaces for VLANs 42, 99, 13, and 3995]
Team members
- Any physical Ethernet adapter can be a team member and will work as long as the NIC meets the Windows Logo requirements
- Teaming of InfiniBand, WiFi, WWAN, etc., adapters is not supported
- Teams of teams are not supported
Team member roles
A team member may be active or standby.
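Member roles can be switched with Set-NetLbfoTeamMember; the adapter name here is illustrative:

```powershell
# Hold one member in reserve; it becomes active if an active member fails
Set-NetLbfoTeamMember -Name "NIC2" -AdministrativeMode Standby

# Return it to active duty
Set-NetLbfoTeamMember -Name "NIC2" -AdministrativeMode Active
```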
Teaming in a VM is supported
- Limited to switch independent, Address Hash mode
- Teams of two team members are supported
- Intended/optimized to support teaming of SR-IOV VFs, but may be used with any interfaces in the VM
- Requires configuration of the Hyper-V switch, or failovers may cause loss of connectivity
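The Hyper-V switch configuration mentioned above is the AllowTeaming setting on the VM's network adapters, applied on the host (the VM name is illustrative):

```powershell
# On the Hyper-V host: permit this VM's virtual NICs to be teamed in the guest
Set-VMNetworkAdapter -VMName "VM01" -AllowTeaming On
```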
Manageability
Intuitive, easy-to-use NIC Teaming UI
- So intuitive and powerful that some Beta customers say they don't want to bother learning the PowerShell cmdlets
- The UI operates completely through PowerShell – it uses PowerShell cmdlets for all operations
- Manages servers (including Server Core) remotely from your Windows 8 client PC
Powerful PowerShell cmdlets
- Object: NetLbfoTeam (New, Get, Set, Rename, Remove)
- Object: NetLbfoTeamNic (Add, Get, Set, Remove)
- Object: NetLbfoTeamMember (Add, Get, Set, Remove)
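A short lifecycle sketch using these cmdlet objects (team and adapter names are illustrative):

```powershell
Get-NetLbfoTeam                                    # inspect all teams and their status
Rename-NetLbfoTeam -Name "Team1" -NewName "Prod"   # rename a team
Add-NetLbfoTeamMember -Name "NIC3" -Team "Prod"    # grow the team by one adapter
Remove-NetLbfoTeam -Name "Prod"                    # tear the team down
```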
Feature interactions
- RSS: programmed directly by TCP/UDP when bound to TCP/UDP
- VMQ: programmed directly by the Hyper-V switch when bound to the Hyper-V switch
- IPsecTO, LSO, jumbo frames, all transmit checksum offloads: yes – advertised if all NICs in the team support it
- RSC, all receive checksum offloads: yes – advertised if any NIC in the team supports it
- DCB: yes – works independently of NIC Teaming
- RDMA, TCP Chimney offload: no support through teaming
- SR-IOV: teaming in the guest allows teaming of VFs
- Network virtualization: yes
Limits on NIC Teaming
- Maximum number of NICs in a team: 32
- Maximum number of team interfaces: 32
- Maximum teams in a server: 32
Not all maximums may be available at the same time due to other system constraints
demo
NIC Teaming
SMB Multichannel
[Diagram: an SMB client and SMB server connected by multiple RDMA NICs, multiple 1GbE NICs, or a single 10GbE RSS-capable NIC]
SMB Multichannel – multiple connections per SMB session

Full throughput
- Bandwidth aggregation with multiple NICs
- Multiple CPU cores engaged when using Receive Side Scaling (RSS)

Automatic failover
- SMB Multichannel implements end-to-end failure detection
- Leverages NIC Teaming if present, but does not require it

Automatic configuration
- SMB detects and uses multiple network paths
Sample Configurations
Multiple 10GbE in a NIC team
[Diagrams: SMB client/server pairs using multiple 10GbE NICs in a NIC team, multiple 1GbE NICs, multiple 10GbE/IB NICs, and a single RSS-capable 10GbE NIC. Vertical lines are logical channels, not cables.]
SMB Multichannel – Single 10GbE NIC
Without Multichannel (1 session): no failover; can't use the full 10Gbps; only one TCP/IP connection; only one CPU core engaged.

With Multichannel (1 session): no failover; full 10Gbps available; multiple TCP/IP connections; Receive Side Scaling (RSS) helps distribute load across CPU cores.
[Diagrams: CPU utilization per core for a single 10GbE NIC – one core saturated without Multichannel, load spread across cores with Multichannel and RSS]
SMB Multichannel – Multiple NICs

Without Multichannel (1 session): no automatic failover; can't use the full bandwidth; only one NIC engaged; only one CPU core engaged.

With Multichannel (1 session): automatic NIC failover; combined NIC bandwidth available; multiple NICs engaged; multiple CPU cores engaged.
[Diagrams: SMB client/server pairs with dual 10GbE NICs, without and with RSS]
SMB Multichannel Performance
Preliminary results using four 10GbE NICs simultaneously
Linear bandwidth scaling:
- 1 NIC – 1150 MB/sec
- 2 NICs – 2330 MB/sec
- 3 NICs – 3320 MB/sec
- 4 NICs – 4300 MB/sec
Leverages NIC support for RSS (Receive Side Scaling)
Bandwidth for small IOs is bottlenecked on CPU
[Chart: SMB Client Interface Scaling – Throughput (MB/sec, 0–5000) vs. I/O size (512 bytes to 1 MB) for 1, 2, 3, and 4 x 10GbE NICs]
Data goes all the way to persistent storage. The white paper provides full details: see http://go.microsoft.com/fwlink/p/?LinkId=227841. Preliminary results are based on the Windows Server “8” Developer Preview.
SMB Multichannel + NIC Teaming

With NIC Teaming, without Multichannel (1 session): automatic NIC failover; can't use the full bandwidth; only one NIC engaged; only one CPU core engaged.

With NIC Teaming and Multichannel (1 session): automatic NIC failover (faster with NIC Teaming); combined NIC bandwidth available; multiple NICs engaged; multiple CPU cores engaged.
[Diagrams: SMB client/server pairs with NIC Teaming over dual 1GbE and dual 10GbE NICs]
SMB Direct and SMB Multichannel

Without Multichannel (1 session): no automatic failover; can't use the full bandwidth; only one NIC engaged; RDMA capability not used.

With Multichannel (1 session): automatic NIC failover; combined NIC bandwidth available; multiple NICs engaged; multiple RDMA connections.
[Diagrams: SMB client/server pairs with dual 10GbE R-NICs and dual 54Gb InfiniBand R-NICs]
SMB Multichannel – Not applicable
[Diagrams: single-NIC configurations (1GbE, wireless, RSS-capable 10GbE, 10GbE R-NIC) and mixed configurations (1GbE + wireless, 10GbE + 32Gb IB R-NIC, 10GbE NIC + 10GbE R-NIC)]

- Single NIC configurations, where full bandwidth is already available without Multichannel
- Configurations with different NIC types or speeds
SMB Multichannel Configuration Options
Configuration                           | Throughput | Fault tolerance (SMB) | Fault tolerance (non-SMB) | Lower CPU utilization
Single NIC (no RSS)                     | ▲          |                       |                           |
Multiple NICs (no RSS)                  | ▲▲         | ▲                     |                           |
Multiple NICs (no RSS) + NIC Teaming    | ▲▲         | ▲▲                    | ▲                         |
Single NIC (with RSS)                   | ▲          |                       |                           |
Multiple NICs (with RSS)                | ▲▲         | ▲                     |                           |
Multiple NICs (with RSS) + NIC Teaming  | ▲▲         | ▲▲                    | ▲                         |
Single NIC (with RDMA)                  | ▲          |                       |                           | ▲
Multiple NICs (with RDMA)               | ▲▲         | ▲                     |                           | ▲

- Multichannel is on by default for SMB.
- NIC Teaming is helpful for faster failover.
- NIC Teaming is helpful for non-SMB traffic (mixed workloads, management).
- NIC Teaming is not compatible with RDMA.
Troubleshooting SMB Multichannel

PowerShell:
- Get-NetAdapter
- Get-SmbServerNetworkInterface
- Get-SmbClientNetworkInterface
- Get-SmbMultichannelConnection
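A typical troubleshooting pass with these cmdlets might look like:

```powershell
# Do client and server agree on usable interfaces (speed, RSS, RDMA)?
Get-SmbClientNetworkInterface
Get-SmbServerNetworkInterface

# Which TCP connections is an active SMB session actually using?
Get-SmbMultichannelConnection
```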
Event log: Applications and Services Logs > Microsoft > Windows > SMB Client
Performance counters: SMB Client Shares
demo
SMB Multichannel
Some Windows Storage Resources
Virtualizing Storage for Scale, Resiliency, and Efficiencyhttp://go.microsoft.com/fwlink/?LinkID=254536
How to Configure Clustered Storage Spaces in Windows Server 2012http://go.microsoft.com/fwlink/?LinkID=254538
Storage Spaces FAQhttp://go.microsoft.com/fwlink/?LinkID=254539
© 2012 Microsoft Corporation. All rights reserved. Microsoft, Windows, Windows Vista and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries. The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.