
Competitive Brief

Marvell FastLinQ 41000 Series 10/25GbE NICs Outperform, Outsmart and Outlast Intel 700 Series (X520, X710, XXV710) NICs

SN0530936-00 Rev. E 01/20

KEY BENEFITS

Leadership in 25GbE: With over a year’s lead in 25GbE introduction vs. Intel, Marvell 10/25GbE NICs are a top choice of datacenters worldwide

Accelerate access to NVMe: Concurrent RoCE and iWARP RDMA delivers flexibility and choice in accelerating virtual machines and access to NVMe storage. Intel currently does not offer any RDMA support for 10/25GbE NICs.

Scale Virtual Workloads: Marvell FastLinQ enables 50% more virtual functions vs. Intel, enabling workloads to scale to their maximum potential

QoS for Hypervisor Infrastructure Traffic: Marvell NIC partitioning delivers fine-grained QoS for defining guarantees and limits, enabling customers to meet SLAs. Intel NICs do not offer QoS for virtual infrastructure traffic.

Increase Virtual Machine Density: Full offloads for storage protocols free up server CPU cycles to host more VMs and monetize application performance. Intel 700 Series NICs are not capable of iSCSI and FCoE offloads.

Lowers OpEx: Advanced web-based NIC management with cross-platform capabilities lowers costs and accelerates problem identification. Intel's NIC management offerings are not as comprehensive as Marvell's.


EXECUTIVE SUMMARY

In the era of hybrid cloud and DevOps, IT departments are under tremendous pressure to move faster and be more agile. Traditional architectures, which rely on the sheer compute capability of server infrastructure and are driven by complex manual processes, cannot deliver the power, simplicity, and velocity necessary to meet the demands of the modern datacenter. Ethernet forms the networking backbone connecting customers and systems together, and networking requirements have grown exponentially as these systems scale and adopt NVMe while workload density continues to increase. The success of the modern enterprise infrastructure requires a resilient and scalable network architecture built on intelligent, high-performance Ethernet NICs that can offload the server CPU, enabling customers to leverage and monetize the compute complex.

Marvell FastLinQ NICs align to the requirements of hybrid cloud and DevOps-driven IT by delivering high-speed, flexible, and offloaded I/O solutions that Outperform, Outsmart and Outlast Intel 700 Series 10/25GbE NICs, which rely on host CPU cycles to serve networking I/O.

This competitive brief highlights important benefits of Marvell FastLinQ 41000 Series 10/25GbE NICs vs. Intel X520, X710, and XXV710 based NICs. Intel OmniPath technology is not covered in this brief because, unlike Ethernet, it is a proprietary technology focused on specific HPC workloads.


LEADERSHIP IN 25GbE

According to a recent report from Crehan Research Inc., 25GbE customer adoption is off to a much stronger start than either 10GbE or 40GbE [1]. In fact, 25GbE shipments have handily surpassed two hundred thousand ports in just a little over a year, a milestone that took 10GbE about six years to reach and 40GbE about four years. [Figure 1]

Marvell introduced its first-generation FastLinQ 10/25GbE NICs to early customers in 2015 and announced broad adoption in 2016, more than a year before Intel introduced its 700 Series based 10/25GbE NICs. Now shipping its second generation of 10/25GbE NICs, the FastLinQ 41000 Series, Marvell NICs are among the top choices of datacenters worldwide.

In addition to being first to market, Marvell FastLinQ 41000 Series 10/25GbE NICs deliver a true 25GbE interface straight from the NIC controller, while Intel 25GbE NICs (XXV710) re-purpose previous-generation 40GbE technology and use a gearbox (additional hardware, more components, more points of failure) to convert 40GbE to 25GbE [Figure 2]. Intel's 25GbE design has the potential to impact the reliability of its 25GbE NICs.

ACCELERATE ACCESS TO NVME WITH RDMA

Non-volatile memory express (NVMe) technology has made significant inroads within the storage industry over the past few years and is now recognized as a streamlined, high-performance, low-latency, and low-overhead interface for solid-state drives (SSDs). Considering the ubiquity of Ethernet, low-latency RDMA fabrics like RoCE and iWARP are ideally suited for extending the reach of NVMe and enabling the full utilization of datacenter investments in NVMe. Marvell FastLinQ Universal RDMA technology enables concurrent RoCE and iWARP RDMA. This delivers flexibility and choice in accelerating virtual machines and access to NVMe storage. Intel does not offer any RDMA support for 25GbE NICs, and offers only iWARP support for 10GbE NICs on limited platforms.

Figure 1: 25GbE Adoption faster than 10GbE and 40GbE, per Crehan Research

Figure 2: Marvell FastLinQ 25GbE is a true 25GbE design with fewer discrete components. Source: Intel XXV710 product documentation

Table 1: FastLinQ NICs with Universal RDMA enable multiple datacenter use cases that are not available when deploying Intel 700 Series NICs

| RDMA Use Case | Application | Marvell FastLinQ 41000 Series | Intel X520, X710, XXV710 |
|---|---|---|---|
| Storage Spaces Direct (S2D) | Hyper-converged infrastructure with Windows Server 2016 | Yes | No |
| SMB Direct (SMBD) | Storage acceleration for VMs in Windows Server | Yes | No |
| Accelerated Windows Server Live Migration | Offload and accelerate workload migrations | Yes | No |
| NVMe over Fabrics | Scale out NVMe across the datacenter | Yes | No |
| VMware ESXi PVRDMA | Low-latency virtual machine HPC workloads | Yes | No |
| iSER | Accelerate iSCSI with RDMA | Yes | No |

Marvell Advantage (all use cases above): Marvell FastLinQ is the industry's only 10/25GbE NIC that enables Universal RDMA, concurrent RoCE and iWARP.

[1] http://press-releases.media/early-25gbe-customer-adoption-already-much-faster-than-it-was-for-10gbe-or-40gbe-reports-crehan-research
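To make the NVMe over Fabrics row above concrete, here is a minimal sketch of bringing up an NVMe-oF connection over an RDMA fabric on a Linux host using the standard nvme-cli tool. The target address and subsystem NQN are hypothetical placeholders, and an RDMA-capable NIC (RoCE or iWARP) is assumed.

```python
# Sketch: discover and connect to an NVMe-oF target over an RDMA fabric
# (RoCE or iWARP) using the standard Linux nvme-cli tool. Assumes a Linux
# host with nvme-cli installed and an RDMA-capable adapter. The target
# address and NQN below are hypothetical placeholders.
import subprocess

TARGET_ADDR = "192.168.10.50"   # hypothetical NVMe-oF target IP
TARGET_PORT = "4420"            # default NVMe-oF RDMA service port
TARGET_NQN = "nqn.2020-01.com.example:nvme-pool1"  # hypothetical subsystem NQN

def run(cmd):
    """Run a command and return its stdout, raising on failure."""
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

# Discover subsystems exported by the target over the RDMA transport.
print(run(["nvme", "discover", "-t", "rdma",
           "-a", TARGET_ADDR, "-s", TARGET_PORT]))

# Connect to one subsystem; the kernel creates a local /dev/nvmeXnY device
# whose I/O then flows over RDMA, bypassing the host TCP/IP stack.
run(["nvme", "connect", "-t", "rdma", "-n", TARGET_NQN,
     "-a", TARGET_ADDR, "-s", TARGET_PORT])
```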


Figure 3: FastLinQ NICs with RDMA (RoCE or iWARP) accelerate VM migration vs. NICs without RDMA. Source: Marvell internal testing

Figure 4: Reduced server CPU utilization with full iSCSI offload vs. the Intel software-only approach. Source: Marvell internal testing

SCALE VIRTUAL WORKLOADS

The SR-IOV specification details how a single NIC can be shared between various guest operating systems (the VMs). SR-IOV provides direct VM connectivity and isolation across VMs. It allows the data to bypass the software virtual switch (vSwitch) and provides near-native performance. The benefits of deploying hardware-based SR-IOV-enabled NICs include reduced CPU and memory usage compared to vSwitches. Devices capable of SR-IOV support multiple virtual functions on top of the physical function, where a single VM is assigned one or more virtual functions. Marvell FastLinQ 41000 Series enables 50% more virtual functions vs. Intel, enabling workloads to scale to their maximum potential; a sketch of how virtual functions are enabled follows.
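As a minimal sketch, assuming a Linux host and using the kernel's standard PCI sysfs interface (not a Marvell-specific tool), virtual functions can be queried and enabled like this; the device path and requested VF count are illustrative:

```python
# Sketch: enable SR-IOV virtual functions through the standard Linux PCI
# sysfs interface. The device path and requested VF count are illustrative;
# the achievable maximum depends on the adapter (per Table 2, up to 192 VFs
# on FastLinQ 41000 Series vs. 128 VFs on Intel 700 Series).
from pathlib import Path

PF_DEVICE = Path("/sys/class/net/eth0/device")  # hypothetical physical function

# Query how many VFs this physical function can expose.
total_vfs = int((PF_DEVICE / "sriov_totalvfs").read_text())
print(f"Adapter supports up to {total_vfs} virtual functions")

# Create VFs (the kernel requires sriov_numvfs to be 0 before changing it).
numvfs = PF_DEVICE / "sriov_numvfs"
if int(numvfs.read_text()) != 0:
    numvfs.write_text("0")

# Each VF can then be passed through to a VM, bypassing the software vSwitch.
numvfs.write_text(str(min(16, total_vfs)))
```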

INCREASE VIRTUAL MACHINE DENSITY

The significant increase in bandwidth provided by 10/25GbE networks, in conjunction with the advent of NVMe-based storage, can also be used to support Ethernet protocols for storage area networks (SANs). iSCSI and Fibre Channel over Ethernet (FCoE) are the predominant storage protocols for Ethernet. Marvell FastLinQ 41000 Series Converged Network Adapters implement a full hardware offload for iSCSI and FCoE, while Intel 10/25GbE NICs rely on the x86 CPU (using software initiators) to provide storage access over Ethernet.

Full offloads for storage protocols in the Marvell FastLinQ 41000 Series CNAs free up server CPU cycles [Figure 4] to host more VMs and monetize application performance. Intel 700 Series NICs are not capable of iSCSI and FCoE offloads. The sketch below shows one way to tell the two approaches apart on a live host.
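As an illustrative check (not part of the brief's test methodology), a Linux host with open-iscsi installed reports the transport behind each iSCSI interface: offload adapters surface a hardware driver (for FastLinQ, the qedi driver), while software initiators report tcp and consume host CPU cycles. The set of offload transports below is an assumption covering common offload drivers.

```python
# Sketch: list iSCSI interfaces known to open-iscsi and flag offload-capable
# transports. With full-offload adapters the transport is a hardware driver
# (e.g., qedi on FastLinQ), whereas software initiators report "tcp" and
# run the iSCSI stack on the host CPU. Assumes open-iscsi is installed.
import subprocess

OFFLOAD_TRANSPORTS = {"qedi", "bnx2i", "cxgb4i", "be2iscsi"}  # common offload drivers

out = subprocess.run(["iscsiadm", "-m", "iface"],
                     capture_output=True, text=True, check=True).stdout

for line in out.splitlines():
    # iscsiadm prints: <iface_name> <transport>,<hwaddress>,<ipaddress>,...
    name, _, detail = line.partition(" ")
    transport = detail.split(",")[0]
    kind = "hardware offload" if transport in OFFLOAD_TRANSPORTS else "software (host CPU)"
    print(f"{name}: transport={transport} -> {kind}")
```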

LOWERS OPEX

It is well known that ongoing operating expense (OPEX) can be a much greater cost factor than the original capital expense (CAPEX). With that in mind, data center and network managers want the option to remotely manage adapters from a centralized management console. This helps to reduce OPEX and is also critical to ensure network consistency.

As a starting point, Marvell FastLinQ 41000 Series NICs are fully integrated with baseline OS network management utilities. That capability is greatly enhanced with the powerful Marvell QConvergeConsole® (QCC) graphical user interface (GUI), PowerKit, vCenter Plug-in, and CLI, which enable administration of all Marvell Ethernet and Fibre Channel adapters throughout the data center from a single console, locally or remotely, on Linux, Windows, and VMware ESXi. Intel adapters can only be managed with baseline OS utilities and provide very limited options for remote multi-adapter management.

QOS FOR HYPERVISOR INFRASTRUCTURE TRAFFIC

Marvell FastLinQ 41000 Series Adapter NIC partitioning (NPAR) technology is driving next-generation server I/O virtualization by allowing the seamless migration from 1GbE to 10/25GbE while preserving the bandwidth segregation that 1GbE brought to virtual server environments. NPAR, or switch-independent NIC partitioning, divides a single physical 10/25GbE port into multiple (up to 8) physical functions or partitions using a flexible bandwidth capacity allocation. The most common use case of NPAR is managing virtual server infrastructure traffic, where a typical deployment assigns fixed bandwidth to guarantee the performance of live VM migration or vMotion, Fault Tolerance (FT), virtual machine network, and management traffic; the arithmetic behind such an allocation is sketched below.
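A minimal worked example of that allocation, with hypothetical partition names and percentages (not the adapter's actual configuration interface):

```python
# Sketch: the arithmetic behind NPAR-style bandwidth allocation on one
# 25GbE port. Each partition gets a guaranteed minimum share (guarantees
# must not oversubscribe the port) and an optional maximum cap. All names
# and percentages below are illustrative.
PORT_SPEED_GBPS = 25

# (partition, minimum % guarantee, maximum % cap)
partitions = [
    ("vm-network",      45, 100),
    ("vmotion",         25, 100),
    ("fault-tolerance", 20, 100),
    ("management",      10,  25),
]

assert sum(p[1] for p in partitions) <= 100, "guarantees oversubscribe the port"

for name, lo_pct, hi_pct in partitions:
    lo_gbps = PORT_SPEED_GBPS * lo_pct / 100
    hi_gbps = PORT_SPEED_GBPS * hi_pct / 100
    print(f"{name:>15}: guaranteed {lo_gbps:5.2f} Gbps, capped at {hi_gbps:5.2f} Gbps")
```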

The Marvell implementation of NIC partitioning can also be used concurrently with single root I/O virtualization (SR-IOV). This unique capability can reduce the number of adapter ports needed to support failover and load balancing on a virtualized host that is using SR-IOV.

In comparison, Intel NICs do not support NPAR; they support Flexible Port Partitioning (FPP), which is a non-standard extension of SR-IOV and is limited to Linux-only deployments.



SUMMARY

The table below provides a feature-by-feature comparison between Marvell FastLinQ and Intel NICs. Marvell FastLinQ 41000 Series adapters deliver key benefits that ensure maximum value for 10/25GbE deployments.

Table 2: Marvell FastLinQ Advantages vs. Intel NICs

| Capability | Marvell FastLinQ 41000 Series | Intel 700 Series | Marvell Advantage |
|---|---|---|---|
| 10/25GbE speeds | Yes | Yes | The Marvell FastLinQ family also supports 50/100GbE NICs |
| 25GbE NIC introduction | Early 2016 | Over a year later | Marvell 10/25GbE NICs are a top choice of datacenters worldwide |
| NIC Partitioning (NPAR) | Yes | No | With over a decade of NPAR customer deployments, FastLinQ NPAR delivers QoS for hypervisor infrastructure workloads |
| Single Root IOV (SR-IOV) | 192 VFs | 128 VFs | 50% more VFs; in addition, FastLinQ offers concurrent NPAR and SR-IOV |
| Universal RDMA | RoCE and iWARP | Limited iWARP support | RDMA NICs enable optimal utilization of server investments in NVMe |
| Full storage offloads | iSCSI and FCoE | No | Save CPU cycles, host more VMs, deliver higher application performance |
| Cross-platform NIC management | GUI and CLI | Baseline OS utilities only | Web-based cross-platform tools, vCenter Plug-in, PowerShell Toolkit |

LEARN MORE: https://www.marvell.com

Introduction to Marvell FastLinQ:

What is Universal RDMA:

ABOUT MARVELL: Marvell first revolutionized the digital storage industry by moving information at speeds never thought possible. Today, that same breakthrough innovation remains at the heart of the company's storage, processing, networking, security and connectivity solutions. With leading intellectual property and deep system-level knowledge, Marvell semiconductor solutions continue to transform the enterprise, cloud, automotive, industrial, and consumer markets. For more information, visit www.marvell.com.

Copyright © 2020 Marvell. All rights reserved. Marvell and the Marvell logo are registered trademarks of Marvell. All other trademarks are the property of their respective owners.