
Huawei CloudEngine 7800/6800/5800 Series Data Center Switch: Performance, Virtualization, Data Center Features and SDN Evaluation

THE BOTTOM LINE

Source: Tolly, May 2014

© 2014 Tolly Enterprises, LLC Page 1 of 13 Tolly.com

#214120

September 2014

Commissioned by Huawei Technologies Co., Ltd

Huawei CloudEngine 7800/6800/5800 Series Data Center Switch Layer 2 Throughput
(as reported by Ixia IxNetwork 7.22.9.9.9EA)

Table 1

Throughput (% line rate)

Frame Sizes                                                 64-Byte  128-Byte  256-Byte  512-Byte  1024-Byte  1280-Byte  1518-Byte  9216-Byte
CE7850-32Q-EI (32 x 40GbE ports)                            -        100%      100%      100%      100%       100%       100%       100%
CE6850-48S6Q-HI (48 x 10GbE + 6 x 40GbE ports)              100%     100%      100%      100%      100%       100%       100%       100%
CE6810-48S4Q-EI (48 x 10GbE + 4 x 40GbE ports)              100%     100%      100%      100%      100%       100%       100%       100%
CE5850-48T4S2Q-HI (48 x GbE + 4 x 10GbE + 2 x 40GbE ports)  100%     100%      100%      100%      100%       100%       100%       100%
CE5810-48T4S-EI (48 x GbE + 4 x 10GbE ports)                100%     100%      100%      100%      100%       100%       100%       100%
CE5810-24T4S-EI (24 x GbE + 4 x 10GbE ports)                100%     100%      100%      100%      100%       100%       100%       100%

Huawei CloudEngine 7800/6800/5800 Series Data Center Switch:

1. Supports FCoE (FCF, NPV, FSB modes and DCB) with CE7800/6800

2. Supports virtualization with iStack (virtualize 16 physical switches into 1 logical switch) and SVF vertical virtualization (virtualize multiple homogeneous or heterogeneous physical switches into 1 logical switch with local forwarding on leaf nodes)

3. Supports a large Layer 2 TRILL network with 512 nodes and active-active TRILL edge

Note: Ports of the same type were in a snake traffic topology. For example, the CE5810-24T4S-EI had its 24 GbE ports in one snake topology and its four 10GbE ports in another snake topology.

EXECUTIVE SUMMARY

Huawei CloudEngine 7800/6800/5800 series switches are 40GbE/10GbE/GbE data center switches developed by Huawei Technologies Co., Ltd. Huawei commissioned Tolly to evaluate the CE7800/6800/5800 series switches' performance, virtualization capability, features and SDN functionality.

Tolly engineers verified that the CloudEngine switches provided high performance with low power consumption, virtualization capability with Huawei's iStack and SVF technologies, as well as numerous data center features including VEPA, TRILL, FCoE (FCF, NPV, FSB modes and DCB), and Huawei nCenter interoperation with VMware vCenter.

Tests also show that the CloudEngine switches supported OpenFlow SDN, including interoperability with the Huawei Agile Controller and the third-party controller Ryu, L2/L3 line-rate forwarding, multiple flow tables, policy-based routing, and dynamic traffic engineering (TE).

4. Supports OpenFlow SDN with topology discovery, L2/L3 line-rate forwarding, multiple flow tables, policy-based routing and dynamic traffic engineering, with interoperability with the Huawei Agile Controller and third-party SDN controllers

Test Results

Tolly engineers benchmarked the performance and feature set of a range of Huawei CloudEngine 7800/6800/5800 Series Data Center top-of-rack (ToR) switches outfitted with Gigabit Ethernet, 10GbE and 40GbE ports.

The feature evaluation included virtualization, data center functionality and OpenFlow capabilities. Test results are summarized below and detailed in the Test Setup and Methodology section. See Table 4 for the list of all verified items.

Performance

Layer 2 Throughput & Latency

For each device under test, the Layer 2 throughput was measured individually across a range of frame sizes from 64 bytes through 9216 bytes.

Testing encompassed combinations of Gigabit Ethernet, 10GbE and 40GbE ports depending upon the device and model. In all cases, traffic for each port type was snaked from port to port.

As shown in Table 1, all models tested of the 5800/6800 series delivered line rate at every frame size tested from 64-byte to 9216-byte jumbo frames. The CE7850, outfitted with 32 40GbE ports, delivered line-rate throughput at all frame sizes tested from 128-byte to 9216-byte frames.

Tolly engineers measured the latency at the same frame sizes in both 40GbE and 10GbE configurations.

In tests of 40GbE ports on the CE7850 switch, latency ranged from 0.60 μs to 0.73 μs. In tests of 10GbE ports on the CE6850 switch, latency ranged from 0.87 μs to 0.95 μs. In tests of 10GbE ports on the CE6810 switch, latency ranged from 0.81 μs to 1.38 μs. See Table 2 for detailed results.

Power Consumption

To assist network architects in determining operational costs of the data center switches, Tolly engineers measured the power consumption of the devices.

Engineers benchmarked various combinations of ports across the CloudEngine 7800/6800/5800 family. Tests were carried out according to the ATIS recommendations, and the results can be found in Table 3.

Tested May 2014

Huawei Technologies Co., Ltd

CloudEngine 7800/6800/5800 Series Data Center Switches

Performance Evaluation and Feature Validation

Source: Tolly, May 2014

Huawei CloudEngine 7800/6800 Series Data Center Switch Layer 2 Latency
(as reported by Ixia IxNetwork 7.22.9.9.9EA and Spirent TestCenter)

Table 2

Latency (μs)

Frame Sizes                                                  64-Byte  128-Byte  256-Byte  512-Byte  1024-Byte  1280-Byte  1518-Byte  9216-Byte
CE7850-32Q-EI (40GbE port 1 to port 2, cut-through)          0.60     0.62      0.63      0.68      0.73       0.73       0.73       0.73
CE6850-48S6Q-HI (10GbE port 1 to port 2, store-and-forward)  0.87     0.87      0.93      0.94      0.95       0.94       0.94       0.92
CE6810-48S4Q-EI (10GbE port 1 to port 2, cut-through)        0.81     0.86      0.96      1.16      1.38       1.37       1.38       1.37

Note: Line-rate traffic was used for testing. Cut-through latency was measured as FIFO latency, while store-and-forward latency was measured as LIFO latency. Thus, the store-and-forward results reported here do not include the time required to store the frame.

Features

Virtualization

iStack

iStack is Huawei's technology to virtualize multiple ToR switches into one logical switch. Tolly engineers verified that 16 CE6850-48T4Q-EI switches were stacked in a ring or line topology using the iStack technology.

Super Virtual Fabric

Super Virtual Fabric (SVF) is Huawei's technology for vertical virtualization, which can virtualize access switches and core/aggregation switches to function as one logical switch.

CE6850 switches were stacked together using iStack and served as the aggregation switch. Then, the stacked switch was virtualized with multiple CE5810 switches, which served as access switches. See Figure 1 for the topology.

Tolly engineers verified that switches in the SVF supported local forwarding on the leaf nodes and that the stacking links supported link aggregation and load balancing.

Tolly engineers then swapped all CE5810 switches in the test bed with CE6810 switches and verified the same features.

Data Center Features

With the use of server virtualization and cloud computing in data centers, traditional networks face challenges including Layer 2 network scalability issues, the limit of 4,094 VLANs, increased demands on switch MAC tables, network requirements for FCoE traffic, and the difficulty of enforcing network policies on virtual machines (VMs) as they "live migrate" to different hosts or even different data centers.

Tolly engineers verified several features on Huawei CE7800/6800/5800 switches that address these problems. TRILL was verified to expand the Layer 2 network. DCB features were verified to provide lossless Ethernet for FCoE. VEPA was verified to direct all network traffic of VMs to the physical switch for easier management. Huawei nCenter and VSI Manager were evaluated to provide network policy migration following VMs' live migration.

Tolly engineers also verified that Huawei CE7850 and CE6850HI switches could act as the VXLAN overlay network tunnel endpoint and gateway.

Tolly engineers further verified that the Huawei CE12800 data center core switch supported Huawei's Ethernet Virtual Network (EVN) to provide L2 connectivity across the L3 WAN. This feature is discussed in Tolly Test Report #214119.

Source: Tolly, May 2014

Huawei CloudEngine 7800/6800/5800 Series Data Center Switch Power Consumption
(as reported by Chroma Programmable AC Source 6560)

Table 3

Power Consumption (Watts)

                                                            0% Traffic  30% Traffic  100% Traffic  ATIS Weighted Power  ATIS TEER (Gbps/Watts)  ATIS Weighted Watts/Gbps
CE7850-32Q-EI (32 x 40GbE ports)                            277.7       290.6        320.5         292.3                4.38                    0.23
CE6810-48S4Q-EI (48 x 10GbE + 4 x 40GbE ports)              124         130          136           130.0                4.92                    0.20
CE5850-48T4S2Q-HI (48 x GbE + 4 x 10GbE + 2 x 40GbE ports)  110.6       118.3        128.3         118.5                1.42                    0.71
CE5810-48T4S-EI (48 x GbE + 4 x 10GbE ports)                69.9        70.2         72.0          70.4                 1.25                    0.80
CE5810-24T4S-EI (24 x GbE + 4 x 10GbE ports)                49.8        50.3         50.7          50.3                 1.27                    0.79

Notes: 1. Switches were fully loaded with fans and power supplies. 2. White cells are measured results; green cells are calculated results. 3. Alliance for Telecommunications Industry Solutions (ATIS) weighted power = (power consumption with 0% traffic) x 0.1 + (power consumption with 30% traffic) x 0.8 + (power consumption with 100% traffic) x 0.1. ATIS Telecommunication Energy Efficiency Ratio (TEER) = (maximum demonstrated throughput) / (ATIS weighted power). 4. ATIS weighted Watts/Gbps = 1 / (ATIS TEER). 5. iMIX traffic (5% 49-byte frames, 20% 576-byte frames, 42% 1,500-byte frames and 33% 49-1500-byte frames) was used.

TRILL & High Availability

Tolly engineers verified that Huawei CloudEngine switches supported Transparent Interconnection of Lots of Links (TRILL) with a large Layer 2 TRILL network consisting of 512 nodes. Additionally, engineers verified support for high availability with active-active TRILL edge.

Source: Tolly, May 2014

Table 4

Huawei CloudEngine 7800/6800/5800 Series Data Center Switch Tolly Verified Performance and Features

Tolly Certified Performance and Features

Performance
- Line-rate forwarding
- 10GbE port latency (cut-through) as low as 0.8 μs; 40GbE port latency (cut-through) as low as 0.6 μs
- Low power consumption

Virtualization
- iStack: virtualize 16 physical switches into one logical switch
- Super Virtual Fabric (SVF) vertical virtualization (virtualize aggregation and access switches into one) with local forwarding on leaf nodes

Data Center Features
- Transparent Interconnection of Lots of Links (TRILL): support a large L2 network with up to 512 nodes
- High availability with TRILL: active-active TRILL edge - two nodes as a DFS group with one pseudo TRILL nickname
- FCoE (FCF, NPV, FSB modes)
- Data Center Bridging (DCB) - PFC, ETS, DCBX (does not include CE5800 switches)
- 802.1Qbg Virtual Edge Port Aggregator (VEPA)
- Network Policy Migration: controlled by Huawei nCenter, interoperating with VMware vCenter to implement in-service policy migration with virtual machine live migration
- Network Policy Migration: controlled by Huawei VSI Manager, interoperating with VMware vCenter and the IBM 5000V virtual distributed switch using VEPA to implement in-service policy migration with virtual machine live migration
- Underlying network for VXLAN or NVGRE overlay networks
- CE7850 and CE6850HI acted as the VXLAN Tunnel Endpoint (VTEP)
- CE7850 and CE6850HI acted as the VXLAN overlay network gateway

OpenFlow SDN
- Controlled by the Huawei Agile Controller or third-party SDN controllers (tested: Ryu)
- Topology discovery, L2/L3 line-rate forwarding, multiple flow tables, policy-based routing, dynamic traffic engineering (TE)

FCoE/DCB

Engineers verified support for a key set of data center functionality in the areas of Fibre Channel over Ethernet (FCoE) with Data Center Bridging (DCB). The Huawei CloudEngine switches supported FCF, NPV and FSB modes for FCoE. Engineers also verified interoperability between the Huawei CloudEngine switches and CNAs from major vendors including Emulex, QLogic and Intel. See Table 4 and the Test Methodology section for additional details.

VEPA and Network Policy Migration

Tolly engineers verified interoperability between Huawei nCenter and VMware vCenter to implement in-service policy migration with virtual machine migration. When a virtual machine was live-migrated to another host, the network policy (ACL rules and QoS policies) for the virtual machine group also migrated to the appropriate switch or port.

Engineers also verified interoperability between Huawei VSI Manager, VMware vCenter and the IBM 5000V virtual distributed switch using 802.1Qbg Virtual Edge Port Aggregator (VEPA) to implement in-service policy migration with virtual machine live migration.

Overlay Network - VXLAN and NVGRE

Two major data center overlay network technologies are Virtual Extensible LAN (VXLAN) and Network Virtualization using Generic Routing Encapsulation (NVGRE). These overlay technologies provide Layer 2 connectivity for tunnel endpoints (e.g., virtual switches) over a physical Layer 3 network. They can expand the Layer 2 network for virtual machines, overcome the limitation on the number of VLANs by adding a new Layer 2 network segment header (VNI for VXLAN, VSI for NVGRE), and reduce the demands on the MAC tables of the physical switches.

As the underlying physical network only needs to provide Layer 3 connectivity for the tunnel endpoints (e.g., virtual switches), the physical switches do not need to change much. Huawei CE7800/6800/5800 switches acted as the underlying network in the VXLAN and NVGRE overlay network environment during the test.

To allow a virtual environment using VXLAN or NVGRE to communicate with non-VXLAN or non-NVGRE endpoints, as well as to provide Layer 3 connectivity for VXLAN or NVGRE endpoints in different network segments, a gateway is needed. Tolly engineers verified that the Huawei CE7850 and CE6850-HI switches could act as the gateway for the VXLAN overlay network, while the Huawei CE12800 switch could act as the gateway for the VXLAN or NVGRE overlay network.

OpenFlow Software-Defined Networking

Tolly engineers verified various capabilities in the area of software-defined networking (SDN).

Topology Discovery

The Huawei Agile Controller supports displaying the whole network topology using the LLDP topology discovery capability of the CE7800/6800/5800 switches.

Third-Party SDN Controller with Multiple Flow Tables

In addition to verifying SDN management via the Huawei Agile Controller, engineers also verified that the Huawei devices could be managed by a third-party controller - in this test, the Ryu SDN Framework. Multiple flow tables were applied to the CloudEngine switches from the Ryu controller.

Source: Tolly, May 2014  Figure 1
Huawei Super Virtual Fabric (SVF) Test Bed

Flow Table Performance

Engineers verified that the CloudEngine switches delivered line-rate Layer 2 and Layer 3 performance with two 10GbE ports using SDN-based flow tables.

Policy-Based Routing

Tolly engineers verified that policy controls could be used to route traffic through specific switches as configured via SDN.

Dynamic Traffic Engineering

Tolly engineers verified that dynamic traffic engineering (TE) could be used to adjust the forwarding path dynamically based on traffic load.

Test Setup & Methodology

Performance

Throughput

The CE7850-32Q-EI, CE6850-48S6Q-HI, CE6810-48S4Q-EI, CE5850-48T4S2Q-HI, CE5810-48T4S-EI and CE5810-24T4S-EI were tested using the RFC 2544 throughput test suite in Ixia IxNetwork. For each device under test (DUT), all available ports of the same type were in a snake topology. See Table 1 for results.
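A snake topology chains ports so that one tester port pair can exercise every port on the switch: traffic enters the first port, is switched out the second, which is cabled to the third, and so on, with each cabled pair isolated in its own VLAN. A minimal sketch of how such a cabling plan could be generated (the port numbering and VLAN base are illustrative assumptions, not the actual test configuration):

```python
def snake_plan(num_ports, vlan_base=100):
    """Return (cable_pairs, vlan_map) for a port-to-port snake.

    Traffic enters port 1 and exits port `num_ports`; each even port is
    cabled back into the next odd port, and each switching hop gets its
    own VLAN so frames are forced through every port in sequence.
    """
    # External cables join port 2->3, 4->5, ... (even port to next odd).
    cable_pairs = [(p, p + 1) for p in range(2, num_ports, 2)]
    # Each switching hop (1,2), (3,4), ... is isolated in one VLAN.
    vlan_map = {(p, p + 1): vlan_base + i
                for i, p in enumerate(range(1, num_ports, 2))}
    return cable_pairs, vlan_map

pairs, vlans = snake_plan(48)   # e.g. the 48 10GbE ports of a CE6850
print(len(pairs))   # 23 external cables
print(len(vlans))   # 24 switching hops
```

With 48 ports of one type, the tester only needs two ports of its own while every switch port forwards line-rate traffic.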

Latency

Cut-through latency (FIFO) of the CE7850-32Q-EI and CE6810-48S4Q-EI was measured port to port with line-rate traffic using the RFC 2544 latency test suite in Ixia IxNetwork. Store-and-forward latency (LIFO) was measured for the CE6850 switch using the RFC 2544 latency test suite in Spirent TestCenter. Thus, the store-and-forward results reported here do not include the time required to store the frame. See Table 2 for results.

Power Consumption

The power consumption was measured using the same traffic topology as the throughput test. According to the ATIS standard for data center switches, the power consumption at 0% traffic, 30% traffic and 100% traffic was measured using iMIX traffic (5% 49-byte frames, 20% 576-byte frames, 42% 1,500-byte frames and 33% 49-1500-byte frames). Then the ATIS weighted power, ATIS TEER and ATIS weighted Watts/Gbps of each switch were calculated. See the notes of Table 3 for additional details.

The ATIS standard refers to the "Energy Efficiency for Telecommunications Equipment: Methodology for Measurement and Reporting for Router and Ethernet Switch Products" document published by the Alliance for Telecommunications Industry Solutions (https://www.atis.org/docstore/product.aspx?id=25324).
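The weighted-power and TEER arithmetic described above can be checked directly against the measured values in Table 3. A small worked example using the CE7850 figures (its maximum demonstrated throughput is taken as 32 x 40GbE = 1,280 Gbps):

```python
def atis_weighted_power(p0, p30, p100):
    """ATIS weighted power: 10% weight at idle, 80% at 30% load, 10% at full load."""
    return 0.1 * p0 + 0.8 * p30 + 0.1 * p100

def atis_teer(max_throughput_gbps, weighted_power_w):
    """ATIS Telecommunication Energy Efficiency Ratio, in Gbps per Watt."""
    return max_throughput_gbps / weighted_power_w

# CE7850-32Q-EI: 277.7 W at 0% traffic, 290.6 W at 30%, 320.5 W at 100%.
w = atis_weighted_power(277.7, 290.6, 320.5)
teer = atis_teer(32 * 40, w)            # 32 x 40GbE = 1,280 Gbps
print(round(w, 1))         # 292.3 W, matching Table 3
print(round(teer, 2))      # 4.38 Gbps/W, matching Table 3
print(round(1 / teer, 2))  # 0.23 weighted Watts/Gbps
```

The same three-step calculation reproduces every calculated column in Table 3 from the three measured power readings and the switch's port count.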

Virtualization

iStack

iStack is Huawei's technology to virtualize multiple ToR switches into one logical switch. Tolly engineers verified that 16 CE6850-48T4Q-EI switches were stacked in a ring or line topology using the iStack technology.

SVF

Super Virtual Fabric (SVF) is Huawei's technology for vertical virtualization, which can virtualize access switches and core/aggregation switches to be one logical switch.

CE6850 switches were stacked together using iStack and acted as the aggregation switch. Then the stacked switch was virtualized with multiple CE5810 switches, which acted as access switches. See Figure 1 for the topology.

Tolly engineers verified that switches in the SVF supported local forwarding. Also, the stacking links between the aggregation switch and the access switches supported load balancing.

Tolly engineers then swapped all CE5810 switches in the test bed for CE6810 switches and verified the same features.

Data Center Features

TRILL

Transparent Interconnection of Lots of Links (TRILL) uses Layer 3 routing techniques to build a large Layer 2 network. Engineers used Spirent TestCenter to simulate one TRILL node on one port and 510 TRILL nodes on the other port. Both ports were connected to the CloudEngine switch under test. Tolly engineers verified that the switch under test showed all 511 TRILL neighbors. Together with the switch under test, the whole TRILL network included 512 nodes. CE7850, CE6850 and CE5850 switches were all tested.

Engineers also configured two CE6850 switches in active-active status as a DFS group with a pseudo TRILL nickname for high availability, and verified fast failover for switch and link failures.

FCoE

Tolly engineers verified that the Huawei CE6850 could act in FCF, NPV or FSB mode for FCoE. CNAs from major vendors including Emulex, QLogic and Intel were used during the test to verify the Huawei CE6850 switch's interoperability with them.

FCoE - FCF Mode

When a Fibre Channel over Ethernet (FCoE) switch operates in FCoE Forwarder (FCF) mode, it encapsulates FC frames in Ethernet frames and uses FCoE virtual links to simulate physical FC links. It provides standard FC switching capabilities and features on a lossless Ethernet network.

Tolly engineers verified that the CE6850-48S4Q-EI switch supported FCF mode single-hop as well as multi-hop. In the single-hop test, only one CE6850 switch was used to connect the SAN storage and the physical server. In the multi-hop test, two CE6850 switches were used to connect the SAN storage and the physical server in series. In both tests, the physical server could mount and access the LUNs on the SAN storage without any problem.

One Emulex OneConnect OCe11102-FM dual-port 10Gb/s FCoE Converged Network Adapter (CNA) was used on the physical server to connect to the CE6850 switch under test.

Huawei CloudEngine Switch Performance Test Bed

Source: Tolly, May 2014  Figure 2

Devices under test: CE7850-32Q-EI, CE6850-48S6Q-HI, CE6850-48S4Q-EI, CE6810-48S4Q-EI, CE5850-48T4S2Q-HI, CE5810-48T4S-EI and CE5810-24T4S-EI, connected to an Ixia XM12 IP Performance Tester.

Note: Ports of the same type on one switch were in a snake topology: all available 40GbE ports in one snake, all available 10GbE ports in one snake, and all GbE ports in one snake.

FCoE - NPV Mode

A Fibre Channel Storage Area Network (FC SAN) needs a large number of edge switches that are directly connected to nodes (servers and storage). FCoE switches in FCoE N-Port Virtualization (NPV) mode can expand the number of switches in an FC SAN.

The fabric is the main network with FCoE switches in FCF mode. NPV switches reside between nodes and core FCoE FCF switches on the edge of the fabric. NPV switches forward FCoE traffic from their connected nodes to the core FCF switch. The NPV switch appears as an FCF switch to nodes and as a node to the core FCF switch.

Tolly engineers verified that the Huawei CE6850 switch could work in NPV mode and interoperate with a Brocade VDX6700 FCoE switch in FCF mode. One physical server was connected to the Huawei CE6850 switch, while one SAN storage array was connected to the Brocade VDX6700 FCoE switch. FCoE traffic was forwarded to the Brocade VDX6700 switch by the CE6850, and the physical server accessed the SAN storage without any problem.

One QLogic 8200 series 10Gbps CNA was used on the physical server to connect to the Huawei CE6850 switch.

FCoE - FSB Mode

An FCoE switch in FCoE Initialization Protocol Snooping Bridge (FSB) mode does not support the FC protocol itself. It uses FCoE Initialization Protocol (FIP) snooping to prevent attacks.

One port of Spirent TestCenter, simulating a server (FCoE initiator), was connected to a Huawei CE6850 switch in FSB mode. The CE6850 switch was then connected to a Huawei CE12800 switch in FCF mode. Another port of Spirent TestCenter was connected to the CE12800 switch on the other end to simulate a SAN storage array (FCoE target).

An FCoE session was created between the simulated FCoE initiator and target. The CE6850 then stored the FCoE session information via FIP snooping.

Tolly engineers then verified that only FCoE traffic matching the MAC address of the simulated SAN storage could be forwarded to the FCoE initiator. Neither FCoE traffic that did not match the MAC address of the FCoE session, nor other types of non-FCoE traffic that did match the MAC address, could be passed to the FCoE initiator by the CE6850 switch in FSB mode.
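The forwarding behavior verified here amounts to a two-condition filter: a frame reaches the initiator only if it is FCoE and its MAC address belongs to a snooped session. A minimal model of that rule (the frame fields and session table below are illustrative assumptions, not the switch's actual data structures):

```python
FCOE_ETHERTYPE = 0x8906  # EtherType assigned to FCoE

# MAC addresses learned from snooped FIP sessions (illustrative value;
# 0e:fc:00 is the conventional FCoE fabric-provided MAC prefix).
snooped_session_macs = {"0e:fc:00:01:00:01"}

def fsb_forwards(ethertype, mac):
    """Forward only FCoE frames whose MAC matches a snooped FIP session."""
    return ethertype == FCOE_ETHERTYPE and mac in snooped_session_macs

# FCoE frame matching the snooped session MAC: forwarded.
print(fsb_forwards(0x8906, "0e:fc:00:01:00:01"))  # True
# FCoE frame from an unknown MAC: dropped.
print(fsb_forwards(0x8906, "0e:fc:00:99:99:99"))  # False
# Non-FCoE frame (e.g. IPv4, EtherType 0x0800) spoofing the session MAC: dropped.
print(fsb_forwards(0x0800, "0e:fc:00:01:00:01"))  # False
```

Both rejection branches correspond to the two negative cases verified in the test above.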

Engineers also used a real physical server and SAN storage array with the CE6850 in FSB mode and the CE12800 in FCF mode to verify connectivity. The physical server could access the storage without any problem. One Intel CNA was used on the physical server to connect to the CE6850 switch.

Data Center Bridging (DCB)

Data Center Bridging (DCB) is a suite of IEEE standards that provides many advantages for data centers, such as lossless Ethernet for FCoE traffic. Tolly engineers verified DCBX, PFC and ETS, which are components of DCB, on the Huawei CE6850 switch.

DCB - DCBX

The Data Center Bridging Capability Exchange (DCBX) protocol is an extension of the Link Layer Discovery Protocol (LLDP) used to discover peers and exchange configuration information between DCB-compliant switches.

Tolly engineers verified that the Huawei CE6850 switch could use the DCBX protocol to negotiate ETS and PFC settings. ETS and PFC could function only when the ETS and PFC settings matched between the CE6850 switch and the connected Spirent TestCenter.

DCB - PFC and ETS

When Priority-based Flow Control (PFC) is enabled on a switch port for inbound traffic with certain 802.1p priorities, the port sends back-pressure signals to reduce the sending rate of those priorities from the upstream device if network congestion occurs.

Enhanced Transmission Selection (ETS) implements QoS based on Priority Groups (PGs). In the ETS configuration, engineers mapped 802.1p priorities 0 through 7 to the PG0, PG1 and PG15 groups offered by the Huawei CE6850 switch. The groups are scheduled with PQ+DRR: PG15 uses Priority Queuing (PQ), giving it unrestricted bandwidth, and carries management or IPC traffic; PG0 and PG1 use Deficit Round Robin (DRR) to share, by weight, the bandwidth left over by PG15.

In the test, engineers enabled PFC for priority 3 (the default priority for FCoE traffic) to make sure the FCoE traffic would not suffer frame loss. Then priorities 0, 1, 2, 4 and 5 were assigned to PG0; priority 3 (FCoE traffic) was assigned to PG1; and priorities 6 and 7 (IPC traffic) were assigned to PG15. The weight ratio for PG1 (FCoE traffic) and PG0 (LAN traffic) was set to 3:2.

There was one 10GbE link between the CE6850 switch under test and the receiving port of the Spirent TestCenter. Engineers sent 8Gbps of IPC traffic with 802.1p priority 7, 4Gbps of FCoE traffic with priority 3 and 2Gbps of regular Ethernet traffic with priority 0 from two 10GbE ports of the Spirent TestCenter, oversubscribing the 10GbE link at the receiving end.

Tolly engineers then verified that the receiving end received all 8Gbps of IPC traffic, which was in PG15 and therefore had unrestricted bandwidth, plus 1.2Gbps of FCoE traffic in PG1 and 0.8Gbps of regular Ethernet traffic in PG0. The receiving rates of 1.2Gbps FCoE traffic and 0.8Gbps regular Ethernet traffic matched the configured 3:2 weight ratio for PG1 and PG0.

Tolly engineers also verified that, because PFC was enabled, the sending rate of the FCoE traffic from the TestCenter's sending ports was reduced to 1.2Gbps in total (0.6Gbps from each port). Thus the FCoE traffic had zero frame loss. The sending rate of the regular Ethernet traffic with priority 0 remained 2Gbps, and that traffic experienced frame loss because priority 0 was not enabled for PFC.
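The received rates follow directly from the scheduler: the strict-priority group (PG15) takes its full offered load, and the leftover link capacity is split between PG1 and PG0 in proportion to their DRR weights. A small worked check of that arithmetic, using the rates and weights from the test:

```python
def ets_share(link_gbps, pq_offered_gbps, drr_offered, drr_weights):
    """Split the bandwidth left after strict-priority (PQ) traffic among
    DRR groups in proportion to their weights, capped at each group's
    offered load."""
    leftover = link_gbps - pq_offered_gbps
    total_w = sum(drr_weights.values())
    return {g: min(drr_offered[g], leftover * drr_weights[g] / total_w)
            for g in drr_offered}

# 10GbE link; 8Gbps IPC in PG15 (PQ); FCoE (PG1) and LAN (PG0) offered
# 4Gbps and 2Gbps respectively, with DRR weights 3:2 as configured.
rates = ets_share(10, 8, {"PG1": 4, "PG0": 2}, {"PG1": 3, "PG0": 2})
print(rates["PG1"])  # 1.2 Gbps of FCoE received
print(rates["PG0"])  # 0.8 Gbps of LAN traffic received
```

With only 2Gbps left after PG15, the 3:2 weights yield exactly the 1.2Gbps and 0.8Gbps rates observed at the receiving port.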

Network Policy Migration

Huawei provides two tools to help migrate network policy (ACL and QoS rules) along with virtual machines' live migration in a VMware vSphere environment.

Network Policy Migration - nCenter

The first tool is the nCenter component of Huawei's eSight network management application. Engineers first configured nCenter to manage the CE12800 and the CE6850 switch under test. In nCenter, engineers then configured the IP address of VMware vCenter 5.0.0_623373, which managed two VMware ESXi 5.0.0_623860 hosts. nCenter could then use vCenter's APIs to interoperate with vCenter and migrate network policies on the Huawei CloudEngine switches along with the virtual machines. See Figure 3 for the test bed.

One ACL policy (denying a destination IP as an outbound policy) was assigned to the VM group containing one VM on the first ESXi host. Engineers live-migrated the VM to another ESXi host and used ping to check connectivity.

The VM was accessible on the network the whole time with traffic not matching the ACL deny policy. Tolly engineers verified that the ACL policy was migrated to, and enforced on, the appropriate switch along with the VM's migration.

Source: Tolly, May 2014  Figure 3
Huawei CloudEngine Switches Network Policy Migration Test Bed A

Source: Tolly, May 2014  Figure 4
Huawei CloudEngine Switches Network Policy Migration Test Bed B
(5000V refers to the IBM 5000V virtual distributed switch)

Network Policy Migration - VSI Manager with VEPA

The method above can control the ACL and QoS policies between different hosts, but it cannot control traffic between VMs on the same host and in the same VLAN, because that traffic only passes through the vSwitch on the host without reaching the physical Huawei CE6850 switch.

The Virtual Edge Port Aggregator (VEPA) standard was developed to direct all network traffic of any virtual machine to the physical switch, so that the ACL and QoS policies on the physical switch can control all network traffic of any virtual machine. The built-in vSwitch in a VMware ESXi host does not support VEPA natively, so engineers installed the IBM 5000V distributed virtual switch on the VMware ESXi hosts and enabled VEPA on the 5000V. Then engineers configured Huawei VSI Manager to work with the IBM 5000V, VMware vCenter and the CE6850 switch under test. See Figure 4 for the test bed.

One ACL policy (denying a destination IP as an outbound policy) was assigned to the VM group containing one VM on ESXi Host B. Engineers live-migrated the VM from ESXi Host B to ESXi Host C and used ping to check connectivity.

Tolly engineers verified that the ACL policy was migrated to, and enforced on, the appropriate port of the CE6850 switch along with the VM's migration.

Overlay Network Gateway - VXLAN

As shown in Figure 5, one Huawei CE7850 switch and one Huawei CE6850HI switch (CE7850-2 and CE6850-2 in the test bed) acted as the VXLAN Tunnel End Points (VTEPs). The CE7850 or CE6850HI switch at the top acted as the VXLAN overlay network gateway.

Engineers first verified Layer 2 and Layer 3 connectivity within the VXLAN network. When VTEP1 and VTEP2 used the same VXLAN Network Identifier (VNI) in the VXLAN header and the two Spirent TestCenter ports were in the same subnet, the TestCenter ports could communicate with each other across the VXLAN network. When VTEP1 and VTEP2 used different VNIs (that is, were in two different VXLAN network segments) and the two Spirent TestCenter ports were in different subnets, the CE7850 or CE6850HI switch at the top acted as the gateway and provided Layer 3 connectivity across the VXLAN network between VTEP1 and VTEP2, so the two TestCenter ports could still communicate.
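The encapsulation behind this test can be illustrated with a short sketch (not part of the report's test tooling): per RFC 7348, a VTEP prepends an 8-byte VXLAN header that carries the 24-bit VNI over UDP port 4789, and frames with different VNIs belong to different VXLAN segments.

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned UDP port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348): flags, reserved, 24-bit VNI."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08  # 'I' bit set: the VNI field is valid
    # Layout: 1 byte flags, 3 reserved bytes, VNI in the upper 24 bits
    # of the final 4 bytes (low byte reserved).
    return struct.pack("!B3xI", flags, vni << 8)

def parse_vni(header: bytes) -> int:
    """Extract the 24-bit VNI from a VXLAN header."""
    flags, vni_field = struct.unpack("!B3xI", header)
    if not flags & 0x08:
        raise ValueError("VNI-valid flag not set")
    return vni_field >> 8
```

Two VTEPs can only decapsulate each other's frames when the VNI matches, which is why same-VNI ports communicated at Layer 2 while different-VNI segments needed the gateway switch to route between them.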

Engineers then verified Layer 2 and Layer 3 connectivity between the VXLAN overlay network and the traditional network. CE6850HI-1 in the test bed simulated the traditional network outside the VXLAN overlay network. The CE7850 or CE6850HI switch at the top could use its port connected to CE6850HI-1 as a VTEP and provide Layer 2 and Layer 3 connectivity between the VXLAN overlay network and the traditional network.

OpenFlow SDN

SDN - Topology Discovery

Two Huawei CE6850 switches and one Huawei CE7850 switch were configured to connect to the Huawei Agile Controller. Wireshark was used to capture the traffic between the controller and the switches.


Source: Tolly, May 2014 Figure 5

Huawei CloudEngine Switches VXLAN Gateway Test Bed

Tolly engineers verified that Hello, Features_Request, Features_Reply, Set_Config, Get_Config_Request, Get_Config_Reply, Multipart_Request, Multipart_Reply, Packet_In, Flow_Mod and Flow_Removed packets of the OpenFlow 1.3 protocol were all captured. The OpenFlow headers of these packets all carried version 0x04, which identifies the OpenFlow 1.3 protocol.
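The version check described above can be reproduced in a few lines (a sketch, not the tooling Tolly used): every OpenFlow message begins with a fixed 8-byte header whose first byte is the wire version, 0x04 for OpenFlow 1.3, which is what Wireshark displays for each captured message.

```python
import struct

OFP13_VERSION = 0x04  # wire version byte for OpenFlow 1.3
OFPT_HELLO = 0        # message type 0 is Hello

def parse_ofp_header(data: bytes) -> dict:
    """Parse the fixed 8-byte OpenFlow header: version, type, length, xid."""
    version, msg_type, length, xid = struct.unpack("!BBHI", data[:8])
    return {"version": version, "type": msg_type, "length": length, "xid": xid}

# A minimal Hello message as it would appear on the wire (header only, no body).
hello = struct.pack("!BBHI", OFP13_VERSION, OFPT_HELLO, 8, 1)
hdr = parse_ofp_header(hello)
```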

The Huawei Agile Controller also used LLDP to discover the topology of the switches. Tolly engineers verified that the topology map in the Huawei Agile Controller showed the network topology, and topology changes, accurately.

SDN - Multiple Flow Tables with Third Party SDN Controllers

Tolly engineers verified that the Huawei CE6850 and CE7850 could be controlled by a third party SDN controller, the Ryu SDN Framework (http://osrg.github.io/ryu/) version 3.8. Engineers used Wireshark to capture traffic. Hello, Features_Request, Features_Reply, Set_Config, Get_Config_Request, Get_Config_Reply, Multipart_Request, Multipart_Reply, Packet_In, Flow_Mod and Flow_Removed packets of the OpenFlow 1.3 protocol were all verified between Ryu and the CloudEngine switches under test.

After traffic was sent to the CE6850 and CE7850 switches, the Ryu controller pushed multiple flow tables to the switches with Flow_Mod packets. Engineers verified that the flow tables were successfully applied to the switches.
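As a rough illustration of what "multiple flow tables" means (a toy model, not Ryu or switch code; the table entries and match values are hypothetical): in the OpenFlow 1.3 pipeline a packet starts in table 0, and a Goto-Table instruction can hand it to a later table, where a different entry decides the output port.

```python
# Toy model of the OpenFlow 1.3 multi-table pipeline. Each table entry is
# (match_fn, instruction); OpenFlow requires Goto-Table to target a table
# with a strictly higher id, which prevents loops.
def run_pipeline(tables, packet):
    table_id = 0
    while table_id is not None:
        for match, instr in tables[table_id]:
            if match(packet):
                if instr[0] == "goto":       # Goto-Table instruction
                    table_id = instr[1]
                    break
                if instr[0] == "output":     # Output action: forwarding decided
                    return instr[1]
        else:
            return None  # table-miss: no entry matched
    return None

# Hypothetical two-stage pipeline: classify on VLAN in table 0,
# forward on destination in table 1.
tables = {
    0: [(lambda p: p["vlan"] == 10, ("goto", 1))],
    1: [(lambda p: p["dst"] == "h2", ("output", 3))],
}
port = run_pipeline(tables, {"vlan": 10, "dst": "h2"})
```

A real controller would install each stage as Flow_Mod messages carrying a `table_id` and a Goto-Table instruction; this sketch only mirrors the lookup order.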

SDN - Layer 2 and Layer 3 Line-rate Forwarding

Traditional Layer 2 and Layer 3 forwarding is based on MAC and FIB tables. When managed by the Huawei Agile Controller, the CloudEngine switches can instead forward traffic using the flow tables applied by the SDN controller.

When a switch receives traffic, it passes the traffic to the controller. The controller learns the MAC and IP addresses from the traffic and uses a shortest path algorithm to calculate the Layer 2 and Layer 3 forwarding paths over the network topology it has discovered. For Layer 2 forwarding, the controller then applies flow tables with the proper Output action to each switch, so the switches know how to forward the traffic. For Layer 3 forwarding, the controller applies flow tables with the Output action as well as Set-Field and Decrement-TTL actions, so the switches know the forwarding path and can rewrite the MAC addresses and decrement the TTL of the traffic.
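The controller behavior described above can be sketched as follows (an illustrative model over a hypothetical three-switch topology, not Agile Controller code): a breadth-first search over the discovered topology yields a switch-level path, which is then translated into one Output action per transit switch.

```python
from collections import deque

# Hypothetical topology as LLDP discovery might report it:
# links[switch] maps each neighbor to the local egress port toward it.
links = {
    "s1": {"s2": 1, "s3": 2},
    "s2": {"s1": 1, "s3": 2},
    "s3": {"s1": 1, "s2": 2},
}

def shortest_path(src, dst):
    """Breadth-first search over the topology; returns the switch hop list."""
    prev, seen, queue = {}, {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for nbr in links[node]:
            if nbr not in seen:
                seen.add(nbr)
                prev[nbr] = node
                queue.append(nbr)
    return None

# Translate the path into one Output action per transit switch,
# roughly what the controller would push as Flow_Mod messages.
path = shortest_path("s1", "s3")
flow_mods = [(here, {"action": "output", "port": links[here][nxt]})
             for here, nxt in zip(path, path[1:])]
```

For Layer 3 paths, each entry would additionally carry Set-Field (MAC rewrite) and Decrement-TTL actions alongside the Output action.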

Tolly engineers verified the procedure above using two Huawei CE6850 switches and one Huawei CE7850 switch with the Huawei Agile Controller. Line-rate traffic was sent through 10GbE ports on the switches. In both the Layer 2 and the Layer 3 forwarding tests, there was no frame loss with 128-byte frames at 10Gbps.

SDN - Policy-based Routing

Tolly engineers verified that, alongside the shortest path algorithm used by the controller, specific policies can be defined. In the test, engineers defined that traffic from and to certain IPs must pass through one specific switch. All matched traffic then went through that switch even though its path was not the shortest one. Other traffic still followed the shortest paths.

Source: Tolly, May 2014 Figure 6
Huawei CloudEngine Switches SDN Test Bed
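A minimal sketch of this policy logic (hypothetical topology and names, not the controller's implementation): a matched flow's path is built by joining the shortest path from the source to the mandated waypoint switch with the shortest path from the waypoint to the destination, while unmatched flows keep the plain shortest path.

```python
from collections import deque

# Hypothetical four-switch topology: s1-s4 has two equal-cost two-hop
# paths (via s2 or s3); the policy pins matched traffic through s3.
links = {"s1": ["s2", "s3"], "s2": ["s1", "s3", "s4"],
         "s3": ["s1", "s2", "s4"], "s4": ["s2", "s3"]}

def bfs(src, dst):
    """Shortest path by breadth-first search."""
    prev, seen, queue = {}, {src}, deque([src])
    while queue:
        n = queue.popleft()
        if n == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for m in links[n]:
            if m not in seen:
                seen.add(m)
                prev[m] = n
                queue.append(m)

def route(src, dst, waypoint=None):
    if waypoint is None:
        return bfs(src, dst)                         # default shortest path
    return bfs(src, waypoint) + bfs(waypoint, dst)[1:]  # policy path

default_path = route("s1", "s4")                 # unmatched traffic
pinned_path = route("s1", "s4", waypoint="s3")   # policy-matched traffic
```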

SDN - Dynamic Traffic Engineering

The Huawei Agile Controller can direct the switches to adjust forwarding paths dynamically according to the traffic load. The network topology is shown in Figure 6. All hosts were simulated by Spirent TestCenter. The traffic between the two upper hosts (shown in purple and brown) had higher priority; the traffic between the two lower hosts (shown in blue and green) had lower priority.

The rate threshold for falling back to the backup path was set to 8Gbps. The traffic between the upper two hosts ran at 4Gbps bidirectionally. The traffic between the lower two hosts was increased in steps from 0Gbps to 10Gbps and then stepped back down to 3Gbps.

Tolly engineers verified that, when the traffic between the lower two hosts reached 4Gbps (bringing the combined load on the shared link to the 8Gbps threshold), the traffic path changed to the backup path calculated by the controller, as shown in Figure 6. When that traffic dropped below 4Gbps, the traffic path reverted to the shortest path.
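The switching behavior can be modeled in a few lines (an illustrative sketch; the 8Gbps threshold and the constant 4Gbps high-priority load come from the test, while the step values and path names are assumed):

```python
# Toy sketch of threshold-based traffic engineering: once the combined load
# on the primary link reaches the threshold, traffic moves to the backup
# path; when it falls back below, it returns to the shortest (primary) path.
THRESHOLD_GBPS = 8.0
HIGH_PRIO_GBPS = 4.0  # constant load from the higher-priority flows

def select_path(low_prio_gbps):
    total = HIGH_PRIO_GBPS + low_prio_gbps
    return "backup" if total >= THRESHOLD_GBPS else "primary"

steps = [0, 2, 4, 10, 3]  # low-priority load ramping up, then back down
paths = [select_path(g) for g in steps]
```

This reproduces the observed behavior: the path flips to backup exactly when the low-priority load reaches 4Gbps (total 8Gbps) and reverts once it drops below that point.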


Source: Tolly, May 2014 Table 5
Devices Under Test

Model                 Product Line         Software Version
CE7850-32Q-EI         CloudEngine 7850EI   V100R003C00SPC600
CE6850-48S6Q-HI       CloudEngine 6850HI   V100R003C00SPC600
CE6850-48S4Q-EI       CloudEngine 6850EI   V100R003C00SPC600
CE6810-48S4Q-EI       CloudEngine 6810EI   V100R003C00SPC600
CE5850-48T4S2Q-HI     CloudEngine 5850HI   V100R003C00SPC600
CE5810-48T4S-EI       CloudEngine 5810EI   V100R003C00SPC600
CE5810-24T4S-EI       CloudEngine 5810EI   V100R003C00SPC600


About Tolly

The Tolly Group companies have been delivering world-class IT services for more than 25 years. Tolly is a leading global provider of third-party validation services for vendors of IT products, components and services.

You can reach the company by E-mail at [email protected], or by telephone at +1 561.391.5610. Visit Tolly on the Internet at: http://www.tolly.com


Terms of Usage

This document is provided, free-of-charge, to help you understand whether a given product, technology or service merits additional investigation for your particular needs. Any decision to purchase a product must be based on your own assessment of suitability based on your needs. The document should never be used as a substitute for advice from a qualified IT or business professional. This evaluation was focused on illustrating specific features and/or performance of the product(s) and was conducted under controlled, laboratory conditions. Certain tests may have been tailored to reflect performance under ideal conditions; performance may vary under real-world conditions. Users should run tests based on their own real-world scenarios to validate performance for their own networks.

Reasonable efforts were made to ensure the accuracy of the data contained herein but errors and/or oversights can occur. The test/audit documented herein may also rely on various test tools the accuracy of which is beyond our control. Furthermore, the document relies on certain representations by the sponsor that are beyond our control to verify. Among these is that the software/hardware tested is production or production track and is, or will be, available in equivalent or better form to commercial customers. Accordingly, this document is provided "as is," and Tolly Enterprises, LLC (Tolly) gives no warranty, representation or undertaking, whether express or implied, and accepts no legal responsibility, whether direct or indirect, for the accuracy, completeness, usefulness or suitability of any information contained herein. By reviewing this document, you agree that your use of any information contained herein is at your own risk, and you accept all risks and responsibility for losses, damages, costs and other consequences resulting directly or indirectly from any information or material available on it. Tolly is not responsible for, and you agree to hold Tolly and its related affiliates harmless from any loss, harm, injury or damage resulting from or arising out of your use of or reliance on any of the information provided herein.

Tolly makes no claim as to whether any product or company described herein is suitable for investment. You should obtain your own independent professional advice, whether legal, accounting or otherwise, before proceeding with any investment or project related to any information, products or companies described herein. When foreign translations exist, the English document is considered authoritative. To assure accuracy, only use documents downloaded directly from Tolly.com.  No part of any document may be reproduced, in whole or in part, without the specific written permission of Tolly. All trademarks used in the document are owned by their respective owners. You agree not to use any trademark in or as the whole or part of your own trademarks in connection with any activities, products or services which are not ours, or in a manner which may be confusing, misleading or deceptive or in a manner that disparages us or our information, projects or developments.

Test Equipment Summary

The Tolly Group gratefully acknowledges the providers of test equipment/software used in this project.

Vendor    Product                                                     Web
Ixia      XM12 Chassis, IxNetwork 7.22.9.9.9EA                        http://www.ixiacom.com
Spirent   HWS-11U-KIT Chassis, TestCenter v3.95 and v4.20.0576.0000   http://www.spirent.com