Copyright©2017 NTT corp. All Rights Reserved.
Software Stacks to enable Software-Defined Networking and Network Functions Virtualization
Yoshihiro Nakajima <nakajima.yoshihiro@lab.ntt.co.jp, ynaka@lagopus.org>
NTT Network Innovation Laboratories
About me
Yoshihiro Nakajima
Works for Nippon Telegraph and Telephone Corporation, R&D Division, Network Innovation Laboratories
Project lead of the Lagopus SDN software switch
Background:
• High performance computing
• High performance networking and data processing
Agenda
Trends: Software-Defined Networking (SDN), Network Functions Virtualization (NFV)
Dataplane: Lagopus SDN/OpenFlow software switch
• Overview and NFV
• Trials
Controller/C-plane: O3 project, Ryu and GoBGP, Zebra2
Future plan
Networking is stuck…
Many technologies have emerged in the systems-development area: languages, debuggers, testing frameworks, Continuous Integration, Continuous Deployment…
The network area: CLI with vendor-specific formats, CLI over serial or telnet, Expect scripts
Good for humans, but not for software
Trend shift in networking
Closed (vendor lock-in) → Open (lock-in free)
Yearly dev cycle → Monthly dev cycle
Waterfall dev → Agile dev
Standardization → De facto standard
Protocol → API
Special-purpose HW / appliance → Commodity HW / server
Distributed control → Logically centralized control
Custom ASIC / FPGA → Merchant chip
What is Software-Defined Networking?
Innovate services and applications at software-development speed!
Decouple control plane and data plane → free the control plane out of the box (APIs: OpenFlow, P4, …)
Logically centralized view → hide and abstract the complexity of networks; provide a view of the entire network
Programmability via an abstraction layer → enables flexible and rapid service/application development
SDN conceptual model
Reference: http://opennetsummit.org/talks/ONS2012/pitt-mon-ons.pdf
Why SDN?
Differentiate services (innovation)
• Improve user experience
• Provide unique services that are not yet in the market
Time-to-market (velocity)
• Do not depend on vendors' feature roadmaps
• Develop necessary features when you need them
Cost-efficiency
• Reduce OPEX through automated workflows
• Reduce CAPEX with COTS hardware
Open source network stacks for SDN/NFV
(Diagram: an Application Layer of business applications over open northbound APIs; a Control Layer of network services built on open-source controllers; an Infrastructure Layer reached via open southbound protocols, with open-source switch software such as FBOSS and open-source hardware; all alongside open-source cloud computing.)
SDN/NFV history (2008–2014)
Standardization of OpenFlow: Stanford University Clean Slate Program → OpenFlow Switch Consortium → Open Networking Foundation
OpenFlow protocol versions: OF 0.8, OF 0.9, OF 1.0, OF 1.1, OF 1.2, OF 1.4
SDN activity: NOX, Trema, Ryu, OpenDaylight, Lagopus
NFV: ETSI NFV (2012)
History of software vswitch/router and I/O libraries (1995–2015)
Phases: research → internal product → OSS
DPDK: R1.0 release, open-sourced, joined the Linux Foundation; 2012 BT vBRAS demo
Others: Click, netmap, BESS, XDP, OVS (Nicira) / OVN, Lagopus, VPP
Network Functions Virtualization
Replace dedicated network nodes with virtual appliances that run on general-purpose servers
Leverage cloud provisioning and monitoring systems for management
A virtual network function (VNF) may run on a virtual machine or a bare-metal server
Evaluate the benefits of SDN by implementing our own control plane and switch
NFV requirements from 30,000 feet
Network I/O & packet processing: high-performance network I/O for all packet sizes, especially small packets (< 256 bytes); low latency and low jitter
Isolation: performance isolation between NFV VMs; security-related VM-to-VM isolation from untrusted apps
Reliability, availability and serviceability (RAS) functions for long-term operation
What matters in NFV
Still-poor performance of NFV apps: low network I/O performance; large processing latency and jitter
Limited deployment flexibility: SR-IOV has performance and configuration limitations; combining DPDK apps on a guest VM with a DPDK-enabled vSwitch is hard to configure
Limited operational support: DPDK is good for performance but allows only limited dynamic reconfiguration; maintenance features are not realized
Existing vSwitches as of 2013
Not enough performance: packet processing speed < 1Gbps; adding 10K flow entries took over two hours; flow management processing is too heavy
Development & extension difficulties: many abstraction layers cause confusion
• Interface abstraction, switch abstraction, protocol abstraction, packet abstraction…
Many packet-processing code paths exist for the same processing (userspace, kernel space, …)
Invisible flow entries make debugging chaotic
vSwitch requirements from the user side
Run on commodity PC servers and NICs
Provide a gateway function connecting various network domains: support the packet frame types used in DC, IP-VPN, MPLS and access networks
Achieve 10Gbps wire rate with >= 1M flow rules; low-latency packet processing; flexible flow lookup using multiple tables; high-performance flow rule setup/delete
Run in userland and reduce tight dependencies on the OS kernel, for easy software upgrade and deployment
Support various management and configuration protocols
Lagopus: High-performance SDN/OpenFlow Software Switch
Goal of the Lagopus project
Provide an NFV/SDN-aware switch software stack:
• Dataplane API with the OpenFlow protocol and gRPC
• 100Gbps-capable high-performance software dataplane
• DPDK extensions for carrier requirements
• Cloud middleware adaptation
Expand software-based packet processing to carrier networks
What is Lagopus (雷鳥属)?
Lagopus is a small genus of birds in the grouse subfamily, commonly known as ptarmigans, all living in tundra or cold upland areas.
Reference: http://en.wikipedia.org/wiki/Lagopus
© Alpsdake 2013, © Jan Frode Haugseth 2010
Target of the Lagopus switch
Provide a high-performance software switch on Intel CPUs: over-100Gbps wire-rate packet processing per port; highly scalable flow handling
Expand the SDN idea to many network domains: datacenters, NFV environments, mobile networks; various management/configuration interfaces
(Diagram: virtual switches on hypervisors hosting VMs and NFV workloads across a data center, wide-area network, access network and intranet, connected via TOR switches, gateways and CPE.)
What is the Lagopus SDN software switch
Open source, high-performance SDN software switch: multicore-CPU-aware packet processing with DPDK; supports NFV environments; runs on Linux and FreeBSD
Best OpenFlow 1.3-compliant software switch by Ryu certification; many protocol frame matches and actions supported
• Ethernet, VLAN, MPLS, PBB, IPv4, IPv6, TCP, UDP, VxLAN, GRE, GTP
Multiple flow tables, group table, meter table; 1M flow entries (4K flow_mods/sec); over-40Gbps-class packet processing (20 Mpps)
Open source under the Apache v2 license: http://lagopus.github.io/
How many packets must be processed for 10Gbps?
(Chart: packets per second vs. packet size, 0–1280 bytes, up to ~16M packets/sec.)
Short packet, 64 bytes: 14.88 Mpps, 67.2 ns per packet
• 2GHz: 134 clocks • 3GHz: 201 clocks
Large packet, 1KByte: 1.2 Mpps, 835 ns per packet
• 2GHz: 1670 clocks • 3GHz: 2505 clocks
For reference: L1 cache access 4 clocks, L2 cache access 12 clocks, L3 cache access 44 clocks, main memory ~100 clocks; max PPS between cores: 20 Mpps
PC architecture and limitations
(Diagram: dual-socket server with two CPUs connected by QPI, each with local memory and PCI-Express-attached NICs. Reference: Supermicro X9DAi.)
# of CPU cycles for typical packet processing
(Chart: required CPU cycles, 0–6000, at 1Gbps and 10Gbps, for L2 forwarding, IP routing, L2-L4 classification, TCP termination, simple OpenFlow processing, and DPI.)
Approach for vSwitch development
Simple is better for everything: packet processing, protocol handling
Straightforward approach: full scratch (no reuse of existing vSwitch code)
Userland packet processing as much as possible, keeping kernel code small: kernel module updates are hard in operation
Every component & algorithm can be replaced
What is the Lagopus vSwitch
(Architecture: an OpenFlow 1.3 agent and JSON/SNMP/CLI interfaces on top of a switch configuration datastore; a switch HAL over the Lagopus soft dataplane with the OpenFlow pipeline, flow/group/meter tables, flow lookup, flow cache and queue/policer; DPDK libraries and PMD drivers beneath, driving both DPDK and non-DPDK NICs/vNICs, with an OS network stack path for the agent.)
SDN switch agent: full OpenFlow 1.3 support; controller-less basic L2 and L3 support with ACTION_NORMAL
SDN-aware management API: JSON-based control; Ansible support
DPDK-enabled OpenFlow-aware software dataplane: over-10Gbps performance; low-latency packet processing; high-performance multi-layer flow lookup; cuckoo hash for the flow cache
Switch configuration datastore: pub/sub mechanism; switch config DSL; JSON-based control
Various I/O support: DPDK-enabled NICs; standard NICs with raw sockets; tap
Virtualization support: QEMU/KVM; vhost-user; DPDK-enabled VNFs
General packet processing on UNIX
(Diagram: two event-triggered dataplane designs. User-space implementation: 1. interrupt & DMA into kernel skb_buf; 2. read() system call copies packets up to a user-space packet buffer; 3. write() system call copies them back; 4. DMA out. Kernel-space implementation: interrupt & DMA, packet processing in the kernel dataplane, DMA out, with agents in userspace.)
Problems: massive interrupts, context switches, and many memory copies/reads
Data Plane Development Kit (DPDK)
x86-architecture-optimized dataplane libraries and NIC drivers: memory-structure-aware queue and buffer management; packet flow classification; polling-mode NIC drivers
Low-overhead, high-speed runtime optimized for dataplane processing
Abstraction layer for heterogeneous server environments
BSD license
(Diagram: a DPDK app in userspace exchanges packets with the NIC directly via DMA write/read, bypassing the kernel socket and Ethernet driver APIs.)
What DPDK helps
Processing bypass for speed
(Diagram comparison. Standard Linux application, event-based packet handling: NIC → interrupt & DMA → kernel skb_buf → read()/write() system calls → user-space packet buffer → DMA. Application with Intel DPDK, polling-based packet handling: NIC → DMA write/read directly to the user-space packet buffer via the DPDK library's user-mode I/O & HAL.)
Implementation strategy for vSwitch
Massive RX interrupt handling for NIC devices => polling-based packet receiving
Heavy overhead of task switches => explicit thread assignment (one thread per physical CPU)
Lower PCI-Express I/O and memory bandwidth compared with the CPU => reduce the number of I/O and memory accesses
Shared data access is a bottleneck between threads => lockless queues, RCU, batch processing
Basic packet processing
Pipeline: Network I/O RX → frame processing → flow lookup & action → QoS/queue → Network I/O TX
• Network I/O RX: packet classification & distribution to buffers
• Frame processing: packet parsing
• Flow lookup & action: lookup, header rewrite, encap/decap
• QoS/queue: policer, shaper, marking
What we did for performance (across the RX → frame processing → flow lookup & action → QoS → TX pipeline)
• Delayed packet frame evaluation
• Delayed action (processing) evaluation
• Packet batching to improve CPU cache efficiency
• Delayed flow stats evaluation
• Smart flow classification
• Thread assignment optimization
• Parallel flow lookup
• Lookup tree compaction
• High-performance lookup algorithm for OpenFlow (multi-layer, masked, priority-aware flow lookup)
• Flow cache mechanism
• Batch size tuning
Packet processing using multicore CPUs
Exploit many-core CPUs; reduce data copies & moves (access by reference); simple packet classifier for parallel processing in I/O RX
Decouple I/O processing from flow processing; improve D-cache efficiency
Explicit thread assignment to CPU cores
(Diagram: NIC RX buffers → I/O RX threads on CPU0/CPU1 → ring buffers → flow-lookup/packet-processing threads on CPU2–CPU5 → ring buffers → I/O TX threads on CPU6/CPU7 → NIC TX buffers.)
Why OpenFlow match is so hard
OpenFlow semantics include:
Match
• Protocol headers: port #, Ethernet, VLAN, PBB, MAC-in-MAC, MPLS, IPv4, IPv6, ARP, ICMP, TCP, UDP…
• Masks on fields
• Priority
Action
• Output port
• Packet-in / Packet-out
• Header rewrite, header push, header pop
Many header matches: also GRE, L2TP, OSPF…; SCTP behaves like TCP; L3 headers can continue (encapsulations include IP and other protocols)
0: Linear search (first implementation)
For each flow entry, walk the packet's headers and compare field by field:
• entry 0: Is eth_type 0x0800? Is ip_p 17? Is udp_dst 53?
• entry 1: Is eth_type 0x0800? Is ip_p 6? Is tcp_dst 53?
• entry 2: Is eth_type 0x0800? Is ip_p 6? Is tcp_dst 80?
(Packet fields: eth_dst, eth_src, eth_type, (ip_v, ip_hl), ip_tos, (ip_len), (ip_id, ip_off), ip_ttl, ip_p, (ip_sum), ip_src, ip_dst, tcp_src, tcp_dst)
1: Bitmask comparison
Simplify the comparison routine: each flow entry is composed of a mask and a value, and a packet matches when (packet & mask) == value over the header fields. Still a linear search over entries.
(Example: the mask sets 0xffff on eth_type, 0xff on ip_p and 0xffff on tcp_dst; the entry value holds 0x0800, 6 and 80.)
2: Fixed-type search tree
Compose a search tree over dedicated fields to narrow the lookup space, then compare the remaining fields:
packet → array → hash table → array → linear search over the few remaining candidates
3: Search tree with the most-searched field first
Scan all flow entries and compose the search tree so fields are examined in order, starting from the most frequently searched field:
packet → hash table → hash table → hash table → linear search over the remaining entries
Packet batching
Reduce the number of locks on the flow lookup table. Frequent locking is required because the table is shared between:
• the OpenFlow agent and SNMP counter retrieval
• packet processing
Naive implementation: a lock/unlock pair is required for each packet.
Packet batching implementation: one lock/unlock pair covers a whole input batch, so locking cost is amortized.
Bypass the pipeline with a flow cache
Reduce the number of flow lookups across multiple tables: generate a composed flow hash function that covers the flow tables; introduce a flow cache for each CPU core.
Naive implementation: every input packet traverses table1 → table2 → table3.
Lagopus implementation: a new flow traverses the tables once and its composed result is written to the flow cache; subsequent packets of the same flow hit the cache and go straight to output.
Best OpenFlow 1.3-compliant switch (Ryu certification; passed scenarios as total (mandatory, optional))

Type                        Action      Set field     Match           Group      Meter     Total
# of test scenarios         56 (3, 53)  161 (0, 161)  714 (108, 606)  15 (3, 12) 36 (0, 36) 991 (114, 877)
Lagopus 2014.11.09          59 (3, 56)  161 (0, 161)  714 (108, 606)  15 (3, 12) 34 (0, 34) 980 (114, 866)
OVS (kernel) 2014.08.08     34 (3, 31)  96 (0, 96)    534 (108, 426)  6 (3, 3)   0 (0, 0)   670 (114, 556)
OVS (netdev) 2014.11.05     34 (3, 31)  102 (0, 102)  467 (93, 374)   8 (3, 5)   0 (0, 0)   611 (99, 556)
IVS 2015.02.11              17 (3, 14)  46 (0, 46)    323 (108, 229)  3 (0, 2)   0 (0, 0)   402 (111, 291)
ofswitch 2015.01.08         50 (3, 47)  100 (0, 100)  708 (108, 600)  15 (3, 12) 30 (0, 30) 962 (114, 848)
LINC 2015.01.29             24 (3, 21)  68 (0, 68)    428 (108, 320)  3 (3, 0)   4 (0, 4)   523 (114, 409)
Trema 2014.11.28            50 (3, 47)  159 (0, 159)  708 (108, 600)  15 (3, 12) 34 (0, 34) 966 (114, 854)
Performance evaluation
Summary: throughput at 10Gbps wire rate; 1M flow rules
Evaluation models:
• WAN-DC gateway: MPLS-VLAN mapping
• L2 switch: MAC address switching
Evaluation setup
Server spec:
• CPU: dual Intel Xeon E5-2660 (8 cores / 16 threads, 20M cache, 2.2 GHz, 8.00GT/s QPI, Sandy Bridge)
• Memory: DDR3-1600 ECC 64GB (quad-channel, 8x8GB)
• Chipset: Intel C602
• NIC: Intel Ethernet Converged Network Adapter X520-DA2 (Intel 82599ES, PCIe v2.0)
(Setup: a tester sends flows to the Lagopus server's flow table and measures throughput (bps/pps/%) while varying flow rules, packet size, and flow cache on/off.)
WAN-DC gateway
(Charts: throughput vs. packet size for 1 flow with flow cache; throughput vs. number of flows at 1518-byte packets.)
L3 forwarding performance with a 40G NIC, 10000 IP subnet entries
Setup: CPU E5-2667v3 3.20GHz x2; memory DDR4 64GB; NIC Intel X710 x2; OS Ubuntu 14.04 LTS; DPDK 2.2; Lagopus 0.2.4 with default options
Lagopus version 0.2.10
Full OpenFlow 1.3 support; limited OpenFlow 1.5 support (flow_mod-related instructions); general tunnel encap/decap extension (EXT-382 and EXT-566) support
• GRE, VxLAN, GTP, Ethernet, IPv4, IPv6
• The updated draft will be implemented soon
Flexible & high-performance dataplane; hybrid-mode support
• ACTION_NORMAL (L2, L3)
Various network I/O support
• DPDK NICs, vNICs (vhost-user with virtio-net), non-DPDK NICs (raw socket)
Leverages the OS kernel network stack for the OpenFlow switch
• ARP, ICMP and routing control packets are escalated to the network stack (via a tap IF)
Queue and meter tables
Linux, FreeBSD and NetBSD support; virtualization support for NFV
• DPDK-enabled VNFs on the DPDK-enabled vSwitch; QEMU/KVM support through virsh
Hands-on and seminar
Collaboration with Lagopus
Business, research institutes and networks
Software switch collaboration
White box switch collaboration
High-performance vNIC framework for hypervisor-based NFV with a userspace vSwitch
• To provide novel components enabling high-performance NFV on general-purpose hardware
• To provide a high-performance vNIC with operation-friendly features
Issues on NFV middleware
Performance bottlenecks in NFV with a hypervisor domain
• Packet receive/send causes VM transitions
• HW emulation needs CPU cycles & VM transitions
• Privileged register accesses for the vNIC cause VM transitions
• System calls cause context switches on the guest VM
A VM transition costs about 800 CPU cycles
vNIC strategy for performance & RAS
• Use a para-virtualization NIC framework, not full virtualization (emulation-based)
• Global shared-memory-based packet exchange, to reduce memory copies
• User-space packet data exchange, with no kernel-to-userspace packet crossing
Target NFV architecture with a hypervisor
DPDK apps or legacy apps on guest VMs plus a userspace DPDK vSwitch, connected by a shared-memory-based vNIC; reduce OS kernel involvement
• Run in userspace to avoid VM transitions and context switches
• Memory-based packet transfer
Existing vNICs between the userspace vSwitch and guest VM (1/2)
1. DPDK e1000 PMD with QEMU's e1000 full virtualization; vSwitch connected by tap
2. DPDK virtio-net PV PMD with the QEMU virtio-net framework; vSwitch connected by tap
3. DPDK virtio-net PV PMD with the vhost-net framework; vSwitch connected by tap
All three: pros: legacy and DPDK support, opposite-side status detection; cons: bad performance, many VM transitions, context switches
Existing vNICs between the userspace vSwitch and guest VM (2/2)
4. DPDK ring via the QEMU IVSHMEM extension; vSwitch connected by shared memory. Pros: best performance. Cons: DPDK-only, static configuration, no RAS
5. DPDK virtio-net PV PMD with the QEMU virtio-net framework; vSwitch uses the DPDK vhost-user API to connect to the virtio-net PMD. Pros: good performance, supports both legacy and DPDK. Cons: no status tracking of the opposite device
High-performance vNIC framework for NFV
This patch has already been merged into DPDK.
vNIC requirements for NFV with a userspace vSwitch
High performance: 10-Gbps network I/O throughput; no virtualization transition between a guest VM and the userspace vSwitch; simultaneous support of DPDK apps and the DPDK userspace vSwitch
Functionality for operation: isolation between the NFV VM and the vSwitch; flexible service maintenance support; link status notification on both sides
Virtualization middleware support: open-source hypervisor (KVM); DPDK and legacy app support; no OS (kernel) modification on the guest VM
vNIC design
A vNIC as an extension of the virtio-net framework: para-virtualization network interface; packet communication via global shared memory; one packet copy to ensure VM-to-VM isolation; control messages via inter-process communication between pseudo devices
(Diagram: the DPDK-enabled vSwitch's software dataplane with a pseudo PMD-enabled device on the host exchanges packets with DPDK network apps in the guest VM through global shared memory, with IPC-based control communication to the guest's virtio-net-compatible device.)
vNIC implementation
Virtq-PMD driver (~4K LOC): a virtio-net device with DPDK extensions; DPDK API and PV-based NIC (virtio-net) API; global shared-memory packet transmission on hugeTLB pages; UNIX-domain-socket-based control messages
• Event notification (link status, finalization)
• Polling-based liveness check of the opposite device
QEMU (~1K LOC): virtio-net-ipc device in shared memory space; shared-memory-based device mapping
Performance
Performance benchmark
Micro-benchmarking tools:
• testpmd: a polling-based DPDK bridge app that reads data from one NIC and writes it to another in both directions
• null-PMD: a DPDK-enabled dummy PMD that generates packets from a memory buffer and discards packets into a memory buffer
(Measurement configurations, each measured at the virtq PMD: testpmd on the guest VM against a vhost app on the host; against testpmd on the host via pcap PMD; via the kernel virtio-net driver and TAP; and a bare-metal testpmd configuration.)
Performance evaluation (Mpps / Gbps)
The virtq PMD achieved: 62.45 Gbps (7.36 Mpps) unidirectional throughput; 122.90 Gbps (14.72 Mpps) bidirectional throughput; 5.7x faster than the Linux driver at 64B and 2.8x faster at 1500B
The virtq PMD achieved better performance for large packets against the vhost app
Container adaptation
Vhost-user for containers
A vhost-user-compatible PMD for containers: virtio-net-based backend; shared-memory-based packet data exchange; event trigger via a shared file; 27.27 Gbps throughput
Performance
Packet flow: pktgen → physical NIC → vSwitch → container (L2Fwd) → vSwitch → physical NIC → pktgen
Setup: Lagopus or docker0 on the server; the container runs L2Fwd or a Linux bridge; pktgen-dpdk as the traffic generator
OS: Ubuntu 16.04.1; CPU: Xeon E5-2697 v2 @ 2.70GHz; Mem: 64GB
Performance
SDN IX @ Interop Tokyo 2015 ShowNet
Interop Tokyo is the biggest Internet-related technology show in Japan. This trial was a collaboration with the NECOMA project (NAIST & University of Tokyo).
Motivation for SDN-IX
IX (Internet eXchange): a packet exchange point between ISPs and DC-SPs; border routers of ISPs exchange route information
Issues:
• Enhance automation in provisioning and configuration
• DDoS attacks are among the most critical issues: ISPs want to reduce DDoS-related traffic at its origin; DDoS traffic occupies link bandwidth
(Diagram: ISPs A through F attached to the IX core switches.)
What is an SDN IX?
A next-generation IX built with SDN technology:
• Web-portal-based path provisioning between ISPs: inter-AS L2 connectivity, VLAN-based path provisioning, private peer provisioning
• Protection of the network from DDoS attacks: on-demand 5-tuple-based packet filtering
An SDN-IX controller and distributed SDN/OpenFlow IX core switches, developed by the NECOMA project (NAIST and University of Tokyo)
Lagopus @ ShowNet 2015
Two Lagopus software switches deployed as SDN-IX core switches; multiple 10Gbps links; dual Xeon E5 8-core CPUs
Lagopus @ ShowNet rack
Connectivity between ASes
(Topology: lagopus-1 (DPID:2) and lagopus-2 (DPID:4), together with pf5240-1 (DPID:1) and pf5240-2 (DPID:3), interconnect AS290 and AS131154 across the Otemachi and Makuhari (venue) sites, peering with DIX-IE, JPIX, KDDI and JGN-X via qfx10k, ne5kx8, CRS-4 and ax.note routers over 10G-LR links; VLANs include 799, 1600, 1060, 810, 910, 920 (temporary), 2, 3000 and 100.)
Traffic on Lagopus @ Makuhari
Average 2Gbps throughput; no packet drops; no reboots and no trouble for one week during Interop Tokyo; occasional 10Gbps burst traffic
A big change happened
Before: the vSwitch had lots of issues with performance, scalability, stability…
After: the vSwitch works well without any trouble! Good performance, good stability.
The SDI special prize of the Best of Show Award at Interop Tokyo 2015 (http://www.interop.jp/2015/english/exhibition/bsa.html); finalist in the ShowNet demonstration award
DPDK-enabled SDN/NFV middleware with Lagopus & VNFs with vhost @ Interop Tokyo 2016
This trial was a collaboration with the University of Tokyo and IP Infusion.
NFV middleware for scale-out VNFs
Flexible load balancing for VNFs with smart hash calculation and flow direction:
• Hash calculation: NetFPGA-SUME computes a hash over IP address pairs; the hash value is injected into the source MAC for flow direction to a VNF
• Classification and flow direction: Lagopus directs flows with a source-MAC lookup
(Diagram: uplink → hash calc & MAC rewrite → Lagopus → MAC-based classification to VNFs on the hypervisor → Lagopus → downlink. Mapping: hash type1 → 52:54:00:00:00:01, type2 → 52:54:00:00:00:02, …, type 256 → 52:54:00:00:00:FF.)
The bird in ShowNet
Two Lagopus deployments: the NFV domain and SDN-IX
https://www.facebook.com/interop.shownet
Challenges for Lagopus
vNICs between the DPDK-enabled Lagopus and DPDK-enabled VNFs (VirNOS)
Many vNICs and a flow director (load balancing): 8 VNFs and 18 vNICs in total
(Diagram: two Lagopus instances sandwich four VirNOS VMs on the hypervisor; Lagopus ports 1–10 connect to each VM's Eth0/Eth1.)
Explicit resource assignment for performance
Packet-processing-workload-aware core assignment is required for Lagopus and the VNFs
(Diagram: a dual-socket server; traffic enters a NIC attached to CPU0, and the cores and local memories of CPU0 and CPU1 must be divided between Lagopus and the VNFs.)
Resource assignment impacts packet processing performance
(Diagram: two core/memory placements of Lagopus and the 8 VNFs across CPU0/CPU1; the good placement achieves 10Gbps, the bad one 4.4Gbps.)
CPU resource assignment for I/O (1/2)
A DPDK-based system needs dedicated CPUs for I/O because DPDK network I/O is polling-based; physical I/O is intensive compared with vNICs
(Diagram: same Lagopus/VirNOS topology; with insufficient I/O cores, throughput is 10/4 Gbps against a 10Gbps offered load.)
CPU resource assignment for I/O (2/2)
Traffic-path-aware CPU assignment: 4 CPU cores were assigned to the I/O threads of Lagopus
(Diagram: 10Gbps in and out, with 5Gbps on each path through the VNFs.)
Performance evaluation
Good performance and scalability, but a long packet journey:
packet-in → physical NIC → Lagopus → vNIC → VNF → vNIC → Lagopus → physical NIC → packet-out
(Chart: traffic [Mbps] vs. packet size [bytes], 0–1400B, up to 10000 Mbps: Lagopus against wire rate.)
Other trials
89Copyright©2017 NTT corp. All Rights Reserved.
Location-aware packet forwarding + service chain (NFV integration)
Location-aware transparent security check by NFV on a virtual network
Intra network
• Web service and clients
• Malware site blocking
Lab network
• Ixia tester for demo
• Policy management (explicit routing for TE)
#1: Segment routing with Lagopus for campus network
[Diagrams: topology of six Lagopus switches (lago0-0/0-1, lago1-0/1-1, lago2-0/2-1) with an NFV node (nfv0) running vFW instances; clients (win00, win01), servers (Serv00, Serv01), a web server, and Ixia testers; one diagram shows the untrusted-server block service]
Flexible video stream transmission to multiple sites and devices
Lagopus switch as a stream duplicator
Simultaneous 4K video streaming to 50 sites
#2: transparent video stream duplication
[Diagram: live source, encoder and decoders over an IP NW, delivering to cinemas or public spaces]
#2: Blackboard streaming without Teacher’s shadow
Human shadow transparent module
Pattern A / Pattern B
Realtime image processing with flow-direction switching
Users can select modes with or without the teacher
No configuration change other than video options
Students cannot see the blackboard because of the teacher's shadow; transparent processing helps them
[Diagram: two Lagopus switches with a realtime image-processing module between them]
O3 project: providing flexible wide area networks with SDN
This research was executed as part of the "Research and Development of Network Virtualization Technology" program commissioned by the Ministry of Internal Affairs and Communications (FY2013-2016).
Open Innovation over Network Platform
• 3 kinds of contributions for user-oriented SDN
(1) Open development with OSS
(2) Standardization of architecture and interface
(3) Commercialization of new technologies
Toward open User-oriented SDN
©O3 Project 93
(1) Open (2) Standardization (3) Commercialization
• Open, Organic, Optima
  – Anyone, Anything, Anywhere
  – Neutrality & efficiency for resources, performance, reliability, …
  – Multi-layer, multi-provider, multi-service
• User-oriented SDN for WAN
  – Softwarization: unified tools and libraries
  – On-demand, dynamic, scalable, high-performance
• Features
  – Object-defined Network Framework
  – SDN WAN open source software
  – SDN design & operations guideline
• Accelerates
  – Service innovation, re-engineering, business eco-system
The O3 Project Concept, Approach, & Goal
The O3 User-oriented SDN Architecture
©O3 Project 95
D-plane: Path Nodes (Opt / Pkt transport) and Switch Nodes (Lagopus, OF); C-plane above it
The D-plane consists of Switch Nodes and Path Nodes: Switch Nodes provide programmability, and Path Nodes provide various types of network resources.
The Orchestrator & Controllers can create and configure virtual networks for SDN users, enabling customized control of each individual D-plane.
[Diagram: OTTs/carriers (OTT-A and OTT-B control applications) control and view their own Virtual NWs; a Network Orchestrator and a Common Control Framework host Controllers for Switch Nodes (Lagopus, OF) and Path Nodes (the SDN Nodes), providing multi-layer, multi-domain control]
• WAN experiments with Multi-vendor Equipment
Proof-of-Concept: Physical Configuration
PoC on Multi-Layer & Domain Control
PoC on Network Visualization
The Hands-on training for ASEAN Smart Network
Ryu SDN Framework
http://osrg.github.io/ryu/
OSS SDN framework founded by NTT
Software for building an SDN control plane in an agile way
Fully implemented in Python
Apache v2 license
More than 350 mailing-list subscribers

Supports the latest southbound protocols
OpenFlow 1.0, 1.2, 1.3, 1.4 (and Nicira extensions)
BGP
OF-Config 1.2
OVSDB JSON
What’s RYU?
Many users
and more…
Developed mainly for network operators
Not for vendors selling a specific hardware switch

Integration with existing networks
Gradually "SDN-ize" the existing networks
Ryu development principles
Your applications are free from the OF wire format (and some details like handshaking)

What does "supporting OpenFlow" mean?
[Diagram: on receive, Ryu converts the OF wire protocol from the data plane into a Python object; your application does something with it; on send, Ryu generates the OF wire protocol from a Python object]
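To see what Ryu abstracts away, here is the fixed 8-byte header that begins every OpenFlow message, parsed with nothing but the Python stdlib (field layout and the message-type value are from the OpenFlow 1.3 spec; Ryu hands your handler a Python object instead of these bytes):

```python
# Sketch: the raw OpenFlow wire format that Ryu hides from applications.
# Every OpenFlow message starts with: version (1 B), type (1 B),
# total length (2 B), transaction id (4 B), all in network byte order.
import struct

OFP_HEADER = struct.Struct("!BBHI")

def parse_ofp_header(data):
    version, msg_type, length, xid = OFP_HEADER.unpack_from(data)
    return {"version": version, "type": msg_type, "length": length, "xid": xid}

# An OpenFlow 1.3 (wire version 0x04) ECHO_REQUEST (type 2), 8 bytes, xid 42:
raw = struct.pack("!BBHI", 0x04, 2, 8, 42)
print(parse_ofp_header(raw))  # {'version': 4, 'type': 2, 'length': 8, 'xid': 42}
```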
Ryu development is automated
GitHub
Push the new code
Unit tests are executed
Docker hub image is updated
Ryu certification is executed on test lab
Ryu certification site is updated
You can update your Ryu environmentwith one command
Lessons learned
What's OpenStack?
OSS for building IaaS
You can run lots of VMs
Many SDN solutions are supported

What does SDN mean for OpenStack?
The network for your VMs is isolated from others
Virtual L2 networks on top of an L3 network
SDN in OpenStack
Virtual L2 on tunnels (VXLAN, GRE, etc)
Typical virtual L2 implementation
[Diagram: four compute nodes, each with VMs attached to a local OVS managed by an agent; the OVS instances are connected by tunnels]
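A minimal sketch of what such a tunnel adds per packet: the 8-byte VXLAN header from RFC 7348, whose 24-bit VNI is what keeps one tenant's virtual L2 segment separate from another's over the shared L3 underlay. Stdlib-only; the VNI value is illustrative.

```python
# Sketch: encode/decode the 8-byte VXLAN header (RFC 7348).
# Layout: flags (0x08 = VNI valid), 24 bits reserved, 24-bit VNI, 8 bits reserved.
import struct

def vxlan_header(vni):
    """Encode a VXLAN header for the given 24-bit VNI."""
    return struct.pack("!II", 0x08 << 24, vni << 8)

def vxlan_vni(header):
    """Decode the VNI from an 8-byte VXLAN header."""
    _, second_word = struct.unpack("!II", header)
    return second_word >> 8

hdr = vxlan_header(5001)   # illustrative tenant segment ID
print(len(hdr), vxlan_vni(hdr))  # 8 5001
```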
People advocated something like this
[Diagram: application logic on a single OpenFlow controller driving multiple data planes via the OpenFlow protocol]
Same as other OpenFlow controllers: the controller is connected to all the OVS instances

Our first-version OpenStack integration
[Diagram: the Neutron Server's plugin drives Ryu through a custom REST API; Ryu speaks OpenFlow to the OVS on every compute node (each node runs a Ryu agent and VMs); the OpenStack REST API sits on top; SDN operational intelligence is centralized in Ryu]
Scalability Availability
What are the problems?
How many switches can a single controller handle? Hundreds? Thousands?

A controller does more than setting up flows
• Replying to ARP requests rather than sending them to all compute nodes
• Making OVS work as an L3 router rather than sending packets to a central router
• You could add more here

Scalability
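A sketch of the first point: answering an ARP request locally by constructing the reply frame at the edge instead of flooding the request. Stdlib-only; the MAC and IP values are illustrative.

```python
# Sketch: build an Ethernet frame carrying an ARP reply ("src_ip is at src_mac"),
# so the edge can answer ARP directly instead of broadcasting to all nodes.
import socket
import struct

def arp_reply(src_mac, src_ip, dst_mac, dst_ip):
    """14-byte Ethernet header + 28-byte ARP reply payload."""
    eth = dst_mac + src_mac + struct.pack("!H", 0x0806)        # EtherType = ARP
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 2)            # Ethernet/IPv4, oper=2 (reply)
    arp += src_mac + socket.inet_aton(src_ip)                  # sender hw/proto addr
    arp += dst_mac + socket.inet_aton(dst_ip)                  # target hw/proto addr
    return eth + arp

vm_mac = bytes.fromhex("02aabbccddee")   # illustrative
asker = bytes.fromhex("021122334455")    # illustrative
frame = arp_reply(vm_mac, "10.0.0.5", asker, "10.0.0.9")
print(len(frame))  # 42
```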
The death of the controller means the death of the whole cloud: no more network configuration
Availability
An OFC on every compute node: one controller handles only one OVS

Our second version (OFAgent driver)
[Diagram: the Neutron Server reaches a Ryu agent (OFC) on each compute node via the OpenStack standard RPC over a queue system; each agent controls only its local OVS; the OpenStack REST API sits on top; SDN operational intelligence is distributed to the nodes]
Released in Icehouse
OpenFlow is used only inside a compute node
• Scalable with the number of compute nodes
• No single point of failure in OFAgent
Push more features to the edges
Distribute features
Place only features you can't distribute (e.g. TE) on a central node

Loosely couple the central node and the edges
Tight coupling doesn't scale (e.g. OpenFlow connections between a controller and switches)
Existing technology like queues works

SDN deployment for scale
NSA (National Security Agency)
More users: Tracking network activities
“The NSA is using NTT’s Ryu SDN controller. Larish says it’s a few thousand
lines of Python code that’s easy to learn, understand, deploy and troubleshoot”
http://www.networkworld.com/article/2937787/sdn/nsa-uses-openflow-for-tracking-its-network.html
TouIX (IX in France) : Replacing expensive legacy switch with whitebox switch and Ryu
More users: IX (Internet Exchange)
“The deployment is leveraging Ryu, the NTT Labs open-source controller”
http://finance.yahoo.com/news/pica8-powers-sdn-driven-internet-120000932.html
Zebra 2.0: Open Source Routing Software
Open Source Revisited
• Apache License
• Written from scratch in Go
• Goroutines & Go channels are used for multiplexing
• Task-completion model + thread model
• Single SPF engine for OSPFv2/OSPFv3/IS-IS
• Forwarding Engine Abstraction for DPDK/OF-DPA
• Configuration with commit/rollback
• gRPC for Zebra control
Architecture
• Single-process / multithread architecture
[Diagram: BGP, OSPF, RSVP-TE, and LDP modules on top of the FEA]
OpenConfigd
• Commit & rollback support in the configuration system
• Configuration is defined by YANG
• CLI, NETCONF, and REST API are automatically generated
• `confsh` - a bash-based CLI shell
• OpenConfig is fully supported
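A toy sketch of the candidate/running split that commit & rollback implies (OpenConfigd itself is YANG-driven and controlled over gRPC; this only illustrates the idea):

```python
# Sketch: a candidate/running configuration store with commit and rollback.
# Edits go to the candidate; commit snapshots the running config for rollback.
import copy

class ConfigStore:
    def __init__(self):
        self.running = {}      # active configuration
        self.candidate = {}    # staged edits
        self._history = []     # previous running configs, for rollback

    def set(self, path, value):
        self.candidate[path] = value

    def commit(self):
        self._history.append(copy.deepcopy(self.running))
        self.running.update(self.candidate)
        self.candidate.clear()

    def rollback(self):
        if self._history:
            self.running = self._history.pop()
            self.candidate.clear()

store = ConfigStore()
store.set("interfaces/eth0/mtu", 9000)   # illustrative config path
store.commit()
store.set("interfaces/eth0/mtu", 1500)
store.commit()
store.rollback()
print(store.running["interfaces/eth0/mtu"])  # 9000
```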
OpenConfigd Architecture
• gRPC is used for transport
• completion/show/config APIs between the shell and OpenConfigd
[Diagram: confsh calls OpenConfigd's completion, show, and config APIs; OpenConfigd (backed by a DB) configures Zebra 2.0 and Lagopus]
Forwarding Engine Abstraction
• Various forwarding engines exist today
  • OS forwarding layer
  • DPDK
  • OF-DPA
• FEA provides a common layer over forwarding engines
• FEA provides
  • Interface/port management
  • Bridge management
  • Routing table
  • ARP table
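The routing-table part of such an abstraction boils down to longest-prefix match, independent of whether the backend is the kernel, DPDK, or OF-DPA. A stdlib-only sketch (the real FEA is in Go; this is only the lookup idea):

```python
# Sketch: a longest-prefix-match routing table, the core service an FEA
# routing-table layer exposes to protocols like BGP/OSPF.
import ipaddress

class RoutingTable:
    def __init__(self):
        self._routes = {}  # ip_network -> next hop

    def add(self, prefix, nexthop):
        self._routes[ipaddress.ip_network(prefix)] = nexthop

    def lookup(self, addr):
        """Return the next hop of the most specific matching prefix, or None."""
        ip = ipaddress.ip_address(addr)
        best = None
        for net, nh in self._routes.items():
            if ip in net and (best is None or net.prefixlen > best[0].prefixlen):
                best = (net, nh)
        return best[1] if best else None

rt = RoutingTable()
rt.add("0.0.0.0/0", "gw-default")   # illustrative next-hop names
rt.add("10.0.0.0/8", "gw-a")
rt.add("10.1.0.0/16", "gw-b")
print(rt.lookup("10.1.2.3"))   # gw-b
print(rt.lookup("192.0.2.1"))  # gw-default
```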
Current limitations of software switches
The number of OF apps is limited
Leverage a network OS for whitebox switches
• OpenSnapRoute, Linux kernel, OpenSwitch
Hard to integrate with other management systems
OpenStack, libvirt, NetFlow, sFlow, BGP extensions
The OF pipeline does not cover all our requirements
Tunnel termination
• IPsec, VxLAN
Control packet escalation/injection
User-defined OAM functionality
Heavy packet processing per OpenFlow flow entry
Lookup, action, long pipeline
Balance of programmability and existing network protocol support
L2, L3 (IPv4, IPv6), GRE, VxLAN, MPLS
Hybrid traffic control
Provide a router-aware programmable dataplane for a network OS
Protocol-aware pipeline and APIs, OpenFlow
Integration with a network OS
Support for existing forwarding & routing protocols (BGP, OSPF)
VPN framework over IP networks
IP as a transport protocol
VxLAN, GRE, IPsec tunnel support
Decouple OpenFlow semantics and wire protocol from the OpenFlow protocol
Provide a gRPC switch control API

Next major upgrade: Lagopus SDN switch router
Forwarding Engine Integration
[Diagram: Zebra 2.0 (BGP, OSPF, RSVP-TE, LDP on top of the FEA) and the Lagopus dataplane, tied together by OpenConfigd (configuration datastore/DB) and a Dataplane Manager; RIB/FIB control plus interface/port and bridge management flow down to the dataplane's FIB, ARP, and stats DBs; C-plane packets are escalated via a tap IF while user traffic stays in the dataplane]
The new version will be available this summer
Availability?
Hirokazu Takahashi, Tomoya Hibi, Ichikawa, Masaru Oki,
Motonori Hirano, Kiyoshi Imai, Takaya Hasegawa, Tomohiro Nakagawa, Koichi Shigihara, Keisuke Kosuga, Takanari Hayama, Tetsuya Mukawa, Saori Usami, Kunihiro Ishiguro
Thanks to our development team
Comments and collaboration are very welcome!
Web
• https://lagopus.github.io
GitHub
Lagopus vswitch
• https://github.com/lagopus/lagopus
Lagopus book
• http://www.lagopus.org/lagopus-book/en/html/
Ryu with general tunnel ext
• https://github.com/lagopus/ryu-lagopus-ext
Conclusion
Questions?