DPACC vNF Overview and Proposed methods Keith Wiles [email protected] 1 2015.04.03 – v0.5


DPACC vNF Overview and Proposed methods

Keith Wiles

[email protected]


2015.04.03 – v0.5


Data Plane Acceleration Overview

•What are the goals of DPACC?

•Identify NFV use cases to illustrate Data Plane Acceleration and the proposed solutions

•Create a common, performance-oriented model

•Agree on the method(s) to move control/data between guest (vNF) and host with acceleration support

•Provide a clean standard solution for vNF deployments

•Create a common framework that all vNFs can utilize

•Suggest solution(s) to OPNFV as a document and PoC


Proposal to OPNFV (overview)


Data Plane Acceleration Goals (abstracted from Wiki)

•The project is to specify a general framework for vNF Data Plane Acceleration (DPA or DPACC).

•Including a common suite of abstract APIs at various OPNFV interfaces

•Enable vNF portability and resource management across various underlying integrated SOCs that may include HW accelerators

•It is desirable to design such a DPA API framework to easily fit underneath existing prevalent APIs (e.g. sockets) for legacy designs

•The framework should not dictate which APIs an application must use; rather, it recognizes that the API abstraction is likely layered, and developers can decide which layer to access directly

•My personal goal is to define a performance oriented solution that is scalable


Proposal to OPNFV


Fixed Service Chaining


• Simple fixed service chaining

– A conceptual view of service chaining from a very high level

– Packet flow is moving from left to right and back again

– Packets are moving between VMs, which is a big part of service chaining

• A fixed model is not very flexible or scalable

• A few questions we need to answer

– What is the path from a VM to the external world at the packet level?

– What is the method of moving packets between VMs?

– How do we get a performant and scalable solution?
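The fixed chain above can be sketched as an ordered table of processing stages each packet traverses; all names here are hypothetical, and the hard-coded table is exactly the inflexibility the slide calls out:

```c
#include <stddef.h>

/* Hypothetical packet and stage types -- illustration only. */
struct pkt { int len; int tag; };

typedef int (*stage_fn)(struct pkt *);

/* Example stages: a firewall that drops zero-length packets and a
 * NAT stage that marks the packet as rewritten. */
static int fw_stage(struct pkt *p)  { return p->len > 0 ? 0 : -1; }
static int nat_stage(struct pkt *p) { p->tag = 1; return 0; }

/* Fixed chain: the order is compiled in, so re-ordering services
 * means rebuilding the table -- a fixed model does not scale. */
static stage_fn chain[] = { fw_stage, nat_stage, NULL };

int chain_process(struct pkt *p)
{
    for (stage_fn *s = chain; *s; s++)
        if ((*s)(p) != 0)
            return -1;          /* a stage dropped the packet */
    return 0;
}
```

A dynamic service chain would replace the static table with a per-flow lookup, which is where the switching questions above come in.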


VM Basic Drivers (direct access)


• Drivers in the Virtual Machine

• Each VM must have a set of drivers to access the device registers

– A vNF needs to have a number of drivers to support all of the devices (not scalable)

– Registers must be mapped into the VM

– The drivers are normally generic and used for general network traffic

– A general network traffic driver most likely means the driver is interrupt driven

• These types of drivers may be very slow compared to other methods (e.g. Poll Mode Drivers)

• Switching packets between VMs needs handling

– External device (e.g. TOR) needs to handle the extra traffic

– Cannot provide any value add in the underlying layers, e.g. load balancing, DPI, flow classification, …

– Hypervisor has no ability to enforce policy, e.g. QoS or security
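The interrupt-driven vs. poll-mode distinction can be sketched as follows; the register layout is hypothetical (real NICs differ per vendor, which is why one driver per device does not scale across vNFs):

```c
#include <stdint.h>

/* Hypothetical MMIO-style register block a VM would have mapped in. */
struct nic_regs {
    volatile uint32_t rx_ready;   /* device sets when a packet arrives */
    volatile uint32_t rx_data;    /* stand-in for the receive buffer   */
};

/* Poll Mode Driver style receive: spin on the ready bit instead of
 * sleeping on an interrupt, trading CPU cycles for lower latency. */
static int pmd_rx(struct nic_regs *r, uint32_t *out)
{
    if (!r->rx_ready)
        return 0;                 /* nothing queued, caller retries */
    *out = r->rx_data;
    r->rx_ready = 0;              /* acknowledge to the device */
    return 1;
}
```

An interrupt-driven driver would instead block until the device raises an interrupt, adding wakeup latency per packet; the PMD loop avoids that at the cost of a busy core.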


Adding a vSwitch to the design


• Adding a Kernel vSwitch

• Each VM uses VirtIO to communicate to the kernel vHost

• Performance is much slower due to:

• Switching between user and kernel space

• Normally requires a copy of the data to the kernel

• Kernel has many services running to interrupt the data flow

– Scheduling, timers, clock interrupts, …

• Does provide a method to switch packets between VMs

• Model is scalable and allows for better dynamic movement of packets between VMs using packet switching

• Fairly easy to configure in today's Linux kernels
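The VirtIO-to-vHost path above rests on a shared ring between guest and host. A much-simplified sketch (the real virtqueue has a descriptor table plus separate avail/used rings; this only shows the producer/consumer split and the data copies the slide blames for the slowdown):

```c
#include <stdint.h>
#include <string.h>

#define RING_SZ 8

/* Toy VirtIO-style ring: the guest publishes buffers at 'avail',
 * the host consumes at 'used'. */
struct vring {
    uint16_t avail, used;
    char bufs[RING_SZ][64];
};

static int guest_tx(struct vring *v, const char *data, size_t n)
{
    if ((uint16_t)(v->avail - v->used) == RING_SZ)
        return -1;                              /* ring full */
    memcpy(v->bufs[v->avail % RING_SZ], data, n); /* copy #1 */
    v->avail++;
    return 0;
}

static int host_rx(struct vring *v, char *out, size_t n)
{
    if (v->used == v->avail)
        return -1;                              /* ring empty */
    memcpy(out, v->bufs[v->used % RING_SZ], n);   /* copy #2 */
    v->used++;
    return 0;
}
```

With a kernel vHost, each side of this exchange can also involve a user/kernel transition, which is the second cost the slide lists.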


User Space vSwitch


• Adding a User Space vSwitch

• Moving the vSwitch to user space removes some of the kernel interaction for packet movement

• Easier to map memory between VMs to reduce data copies

– Does open a fault propagation path if memory is shared

– Also opens a security issue between VMs

• Faster performance; normally does not require kernel modifications, so a stock kernel can be used
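The shared-memory win can be sketched as a pointer-passing ring: instead of copying payloads, VMs exchange pointers into a region mapped into both. The names are hypothetical, and the comments note the fault/security trade-off the slide raises:

```c
#include <stddef.h>

#define SLOTS 4

/* Zero-copy handoff ring living in memory mapped into both VMs.
 * Faster than copying, but a crash or corruption in one VM can now
 * propagate through the shared region, and one VM can read buffers
 * the other still owns -- the isolation/security concerns above. */
struct shm_ring {
    void *slot[SLOTS];
    unsigned head, tail;
};

static int shm_push(struct shm_ring *r, void *buf)
{
    if (r->head - r->tail == SLOTS)
        return -1;                       /* ring full */
    r->slot[r->head++ % SLOTS] = buf;    /* pointer only, no memcpy */
    return 0;
}

static void *shm_pop(struct shm_ring *r)
{
    if (r->tail == r->head)
        return NULL;                     /* ring empty */
    return r->slot[r->tail++ % SLOTS];
}
```

A production design would add ownership handoff rules (who may write a buffer when) precisely to contain the fault-propagation path.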


Software Acceleration Layer (SAL)


• Software Acceleration Layer

• The SAL could handle the request in software or in a hardware offload as needed

• Better abstraction with a SAL to isolate the drivers from vSwitch design

– vSwitch now contains no hardware layer for devices

• SAL handles all hardware/device interaction

• SAL contains drivers possibly tuned to the type of packet acceleration, meaning possibly Poll Mode Drivers for performance

• Allows for VMs not requiring a vSwitch to access devices faster and with less latency

• Extending VirtIO with other device types/services like crypto, DPI and others adds value to the design

– plus allowing the VM API to be a common set of APIs

– Gives VMs portability (across vendors) and longevity (as hardware changes, and the HW/SW optimization point changes)
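The SAL's software-or-hardware dispatch can be sketched as an ops-table lookup; everything here (names, the XOR "cipher") is a hypothetical stand-in, but it shows why the vNF code is identical whichever backend is found, which is the portability argument:

```c
#include <stddef.h>

/* Hypothetical SAL service table: the caller asks for a service and
 * gets back an ops table, hardware-backed when an offload exists,
 * otherwise a software fallback. */
struct accel_ops {
    const char *backend;              /* "hw" or "sw" */
    int (*crypto_encrypt)(const void *in, void *out, size_t n);
};

/* Trivial stand-in for a software crypto path (NOT real crypto). */
static int sw_encrypt(const void *in, void *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        ((char *)out)[i] = ((const char *)in)[i] ^ 0x5a;
    return 0;
}

static const struct accel_ops sw_ops = { "sw", sw_encrypt };

/* Discovery: this sketch has no HW offload, so it always falls back
 * to software -- hiding that lookup is the SAL's job. */
const struct accel_ops *sal_find(const char *service, int hw_present)
{
    (void)service;
    return hw_present ? NULL /* would return hw ops table */ : &sw_ops;
}
```

The vNF only ever calls `ops->crypto_encrypt()`; swapping the SOC or offload hardware swaps the table, not the application.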


More Complex Design with Service Chaining


• Software Acceleration layer adds more flexibility in design

• Adding a SAL also means the SAL can be in the VM

• A VM based SAL could access devices directly (e.g. SR-IOV)

– Better performance from a VM

– Better data isolation for security or fault propagation

• Native SAL applications give even better performance, but have some limitations

– Does not allow direct VM to VM access or switching between VMs

• Provides a clean layered design and flexibility to move features around in the system


Have shown a number of possible options

•Each method has its pros and cons

– A common vNF design needs flexibility for all possible designs

•VM #0 is what we have today, but the performance is not high and can be 10x slower than other solutions

•VM #1 is one of the fastest options, but does not allow for VM to VM communication without external or device hardware

•VM #2 means native applications just work out of the box

•VM #3 is the best solution compared to the other options

– Adding possible direct VM to device access gives good performance

– Using a standard like VirtIO gives a general solution for control/data movement

NFV Overview


Have shown a number of possible options

•VM #4 uses the IVSHMEM (inter-VM shared memory) design for added performance, but it is less standard today

•The last box allows for native SAL applications for non-VM-based acceleration

•Software acceleration makes additional services possible, which can be controlled by the hypervisor. The diagrams would probably get too complicated if we showed all the possible control paths

•This hypervisor control is one advantage of SW acceleration over some of the HW alternatives (especially SR-IOV)


NFV Overview (Continued)


Enhancing VirtIO for better control and adding more device types:

•Need to add Crypto support to VirtIO as an acceleration feature

•Supporting the legacy VirtIO API for backward compatibility is a requirement

•Needs to support exporting the metadata needs of the vNF for acceleration and orchestration

•Enhanced performance is a requirement for the solution

•Adding enhancements to VirtIO allows for faster adoption
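One way to picture backward compatibility is VirtIO's feature negotiation: a new device type or capability sits behind a feature bit that a legacy driver simply never acknowledges. The bit value and request layout below are entirely hypothetical, not from any spec:

```c
#include <stdint.h>

/* Hypothetical feature bit for a crypto capability carried over
 * VirtIO -- illustrative only. */
#define VNF_F_CRYPTO (1u << 24)

/* Hypothetical request header a driver might place on the queue
 * once the crypto feature has been negotiated. */
struct vcrypto_req {
    uint32_t op;       /* e.g. 0 = encrypt */
    uint32_t key_id;
    uint32_t len;
};

/* Feature negotiation in miniature: both sides keep only the bits
 * they both understand, so a legacy driver that never offers
 * VNF_F_CRYPTO keeps using the device exactly as before. */
static uint32_t negotiate(uint32_t device_feats, uint32_t driver_feats)
{
    return device_feats & driver_feats;
}
```

This is why extending VirtIO is attractive: new acceleration types ride on an existing, already-deployed negotiation mechanism.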

Software Acceleration Layer enhancements:

•Help locate/find hardware/software acceleration device for the vNF, if required

•Help enhance support for the orchestration layer along with the VIM for plumbing the data flow

•Managing a vNF around a life cycle is a requirement

•SAL acceleration needs to support a number of software accelerators as well as hardware ones
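The life-cycle requirement above can be sketched as a small state machine the SAL/orchestrator walks a vNF through; the state names are hypothetical:

```c
/* Hypothetical vNF life-cycle states -- names are illustrative. */
enum vnf_state { VNF_INIT, VNF_CONFIGURED, VNF_RUNNING, VNF_STOPPED };

/* Only the forward transitions below are legal in this sketch;
 * anything else (e.g. running an unconfigured vNF) is rejected. */
static int vnf_transition(enum vnf_state from, enum vnf_state to)
{
    switch (from) {
    case VNF_INIT:       return to == VNF_CONFIGURED ? 0 : -1;
    case VNF_CONFIGURED: return to == VNF_RUNNING    ? 0 : -1;
    case VNF_RUNNING:    return to == VNF_STOPPED    ? 0 : -1;
    default:             return -1;
    }
}
```

A real life cycle would add scaling and migration states, but the point stands: the SAL needs a defined set of states and legal transitions for orchestration to manage.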

VirtIO and SAL Enhancements


Summary of options and solutions (not the full list)

•Providing a layered approach gives the best flexibility and scalability

•A software acceleration layer enhances performance in the guest as well as in the host system

– SAL helps bind the vNFs into a single solution that scales

•Able to have different SALs in the VMs from the host SAL

•vSwitch gives clean and simple VM to VM communication

– Able to swap out vSwitch designs containing different features for a given solution

•The solution forces open source and the use of standard open APIs

– DPDK and Open vSwitch are two possible solutions for DPACC; other SAL and vSwitch solutions are reasonable

•Many details need to be considered to maintain an open solution as any single closed solution is not portable, flexible or scalable from a community point of view

•Enhancing VirtIO is the best standards based direction to get adoption while supporting others

•Enhancing Open vSwitch will also gain adoption as performance increases

•Adding more support into VirtIO or the SAL for the orchestration layer is a must

Proposal Summary


Thank You
