An Analysis and Empirical Study of Container Networks
Kun Suo*, Yong Zhao*, Wei Chen†, Jia Rao*
University of Texas at Arlington*, University of Colorado, Colorado Springs†
INFOCOM 2018 @ Hawaii, USA

Slides: ranger.uta.edu/~jrao/papers/INFOCOM18_slides.pdf



The Rise of Containers

• Containers are a lightweight alternative to virtual machines for application packaging

• Key benefits of containers
  ✓ Rapid deployment
  ✓ Portability
  ✓ Isolation
  ✓ Lightweight, efficiency, and density

• Increasingly and widely adopted in datacenters
  ✓ Google Search launches about 7,000 containers every second

Container Networks in the Cloud

• Amazon EC2
• Docker Cloud
• Microsoft Azure
• Other clouds

• Typical use case: containers running in VMs
• Challenging to select an appropriate container network

Network on a single host:
• Bridge (default), None, Host, Container
• NAT (default), Transparent, Overlay, L2 Bridge, L2 Tunnel

Network on multiple hosts:
• NAT (default), Overlay, third-party solutions
• Overlay (default)
• NAT (default), Transparent, Overlay, L2 Bridge, L2 Tunnel

Container networking projects

Container networks provide connectivity among isolated, sandboxed applications

• A qualitative comparison
  ✓ Applicable scenarios
  ✓ Security isolation

• An empirical study
  ✓ Throughput / latency
  ✓ Scalability
  ✓ Overhead / startup cost

Container Networks on a Single Host

• None
  ✓ A closed network stack and namespace
  ✓ High security isolation

• Host mode
  ✓ Shares the network stack and namespace of the host OS
  ✓ Low security isolation

Container Networks on a Single Host

• Bridge mode
  ✓ The default network setting of Docker
  ✓ An isolated network namespace and an IP address for each container
  ✓ Moderate security isolation

• Container mode
  ✓ A group of containers share one network namespace and IP address
  ✓ Low isolation within the same group and moderate isolation across groups

Container Networks on a Single Host

Network   | Intra-machine communication | Inter-machine communication | Access to external networks | Namespace                | Security
None      | /                           | /                           | /                           | Independent, isolated    | High
Bridge    | docker0 bridge              | /                           | Bind host port, NAT         | Independent, isolated    | Moderate
Container | Inter-process communication | /                           | Port binding, NAT           | Shared with group leader | Medium
Host      | Host network stack          | Host network stack          | Host network stack          | Shared with the host OS  | Low

Container Networks on Multiple Hosts

• Host mode
  ✓ Communicate through the host network stack and IP
  ✓ Pros: near-native performance
  ✓ Cons: no security isolation

• Network address translation (NAT)
  ✓ Bind a private container IP to the host public IP and a port number. The docker0 bridge translates between the private and public IP addresses.
  ✓ Pros: easy configuration
  ✓ Cons: IP translation overhead; inflexible due to host IP binding and port conflicts
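The port-binding idea behind the NAT mode can be sketched as a lookup table (a toy illustration only; real Docker NAT is implemented with iptables DNAT rules on the host, and all addresses and ports below are hypothetical):

```python
# Minimal sketch of NAT port mapping: traffic to (host IP, host port) is
# rewritten to a (container IP, container port) endpoint.

def make_nat_table(mappings):
    """mappings: list of (host_port, container_ip, container_port)."""
    return {host_port: (cip, cport) for host_port, cip, cport in mappings}

def translate_inbound(nat_table, host_ip, host_port):
    """Rewrite a packet addressed to (host_ip, host_port) to the container endpoint."""
    if host_port not in nat_table:
        raise KeyError(f"no container bound to {host_ip}:{host_port}")
    return nat_table[host_port]

nat = make_nat_table([(8080, "172.17.0.2", 80), (5432, "172.17.0.3", 5432)])
print(translate_inbound(nat, "203.0.113.10", 8080))  # ('172.17.0.2', 80)
```

This also makes the port-conflict limitation visible: the table is keyed by host port, so two containers cannot bind the same host port.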

Container Networks on Multiple Hosts

• Overlay network

  ✓ A virtual network built on top of another network through packet encapsulation
  ✓ Examples: IPIP, VXLAN, VPN, etc.
  ✓ Pros: isolation, easy to manage, resilient to network topology changes
  ✓ Cons: overhead due to packet encapsulation and decapsulation; difficult to monitor
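The encapsulation cost can be made concrete with standard VXLAN header sizes (a back-of-envelope sketch: the 50-byte outer overhead is outer Ethernet 14 B + outer IP 20 B + outer UDP 8 B + VXLAN 8 B; the inner-header assumption of Ethernet + IP + TCP is ours, not from the slides):

```python
# Fraction of wire bytes that is application payload under VXLAN encapsulation.
VXLAN_OVERHEAD = 14 + 20 + 8 + 8  # outer Ethernet + IP + UDP + VXLAN = 50 bytes

def goodput_fraction(payload_bytes, inner_headers=54):
    """inner_headers: inner Ethernet (14) + IP (20) + TCP (20) headers."""
    wire_bytes = payload_bytes + inner_headers + VXLAN_OVERHEAD
    return payload_bytes / wire_bytes

for size in (64, 256, 1024):
    print(size, round(goodput_fraction(size), 3))
```

The fixed per-packet overhead is why small packets suffer proportionally more from encapsulation than large ones.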

Container Networks on Multiple Hosts

• Routing
  ✓ A network-layer solution based on BGP routing
  ✓ Pros: high performance
  ✓ Cons: BGP not widely supported in datacenter networks; limited scalability; not suitable for highly dynamic networks or short-lived containers
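The forwarding idea behind a BGP-based fabric such as Calico can be sketched as a longest-prefix-match lookup: each host advertises the subnet of its local containers, and peers forward packets to the advertising host (illustrative only; the subnets and next-hop addresses are hypothetical):

```python
# Toy route table: container subnet -> address of the host that advertised it.
import ipaddress

routes = {
    ipaddress.ip_network("10.1.0.0/24"): "192.168.0.1",  # containers on host A
    ipaddress.ip_network("10.2.0.0/24"): "192.168.0.2",  # containers on host B
}

def next_hop(dst):
    """Longest-prefix match over advertised container subnets."""
    dst = ipaddress.ip_address(dst)
    matches = [net for net in routes if dst in net]
    if not matches:
        raise LookupError(f"no route to {dst}")
    return routes[max(matches, key=lambda net: net.prefixlen)]

print(next_hop("10.2.0.17"))  # forwarded to host B
```

Because packets are routed natively rather than encapsulated, there is no per-packet header overhead; the cost is instead that every host churn event requires a routing update, which is why the approach fits poorly with short-lived containers.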

Container Networks on Multiple Hosts

Network | How it works                             | Protocol | K/V store | Security
Host    | Sharing host network stack and namespace | ALL      | No        | No
NAT     | Host network port binding and mapping    | ALL      | No        | No
Overlay | VXLAN, UDP, or IPIP                      | Depends  | Depends   | Encryption supported
Routing | Border Gateway Protocol                  | Depends  | Yes       | Encryption supported

An Empirical Study

• Containers in a single VM
  ✓ How large are the overheads of single-host networking modes?

• Containers in multiple VMs on the same PM
  ✓ How large are the overheads of cross-host networking?

• Containers on multiple PMs vs. containers in multiple VMs on different PMs
  ✓ What is the interplay between the VM network and container networks?

• Impact of packet size and protocol

• Scalability and startup cost

Experiment settings

• Hardware
  ✓ Two DELL PowerEdge T430 servers, each equipped with a dual ten-core Intel Xeon E5-2640 2.6 GHz processor, 64 GB memory, a 2 TB 7200 RPM SATA hard disk, and Gigabit Ethernet

• Software
  ✓ Ubuntu 16.10, Linux kernel 4.9.5, KVM 2.6.1 as hypervisor, Docker CE 1.12, rtl8139 NIC drivers
  ✓ Etcd 2.2.5, Weave 1.9.3, Flannel 0.5.5, and Calico 2.1

• Benchmarks
  ✓ Netperf 2.7, Sockperf 2.8, Sparkyfish 1.2, OSU benchmarks 5.3.2

Container Networking in a Single VM

[Figures: Sparkyfish upload/download throughput relative to w/o container; Sockperf TCP/UDP throughput (MBps); OSU benchmark bandwidth (MBps) and latency (μs) vs. packet size (1 B–4 KB). Series: w/o container, bridge, container, host.]

The container mode and host mode achieved performance close to the baseline, while the bridge mode incurred a significant performance loss.

Diagnosis of Bridge-based Container Networking

W/o container vs. bridge mode: overhead on bridges inside the network stack

• Longer critical path of packet processing due to the centralized docker0 bridge
• Higher CPU usage and possible queuing delays

Container Networking across Multiple VMs

Network       | CPU (c) | CPU (s) | S.dem (c) | S.dem (s)
w/o container | 33.37   | 18.41   | 75.466    | 41.626
Host mode     | 34.19   | 22.04   | 82.403    | 53.127
Def. overlay  | 38.75   | 42.91   | 95.867    | 106.145
Weave         | 54.92   | 47.56   | 150.678   | 130.478
Flannel       | 42.96   | 40.44   | 127.118   | 119.659
Calico (IPIP) | 38.53   | 40.53   | 107.854   | 113.465
Calico (BGP)  | 37.72   | 36.92   | 99.665    | 95.035

(c = client side, s = server side; S.dem = service demand)

[Figure: TCP throughput and average latency, relative to w/o container, for host mode, NAT, default overlay, Weave, Flannel, Calico (IPIP), and Calico (BGP).]

• NAT: overhead of address translation
• Routing: overhead of packet routing
• Overlays: overhead of packet encapsulation and decapsulation, additional bridge processing, prolonged processing inside the kernel, etc.

All overlay networks consumed much more CPU (mostly in softIRQ).

Diagnosis of Overlay Networks

W/o container vs. container in an overlay network: overhead on bridges and on VXLAN

• Much longer critical path inside the kernel
• More CPU usage, more soft interrupt processing

Impact of Packet Size and Protocol

[Figure: TCP and UDP throughput (MBps) vs. packet size (16–1024 bytes) for w/o container, host mode, NAT, default overlay, Weave, Flannel, Calico (IPIP), and Calico (BGP).]

Under TCP, overlay networks do not scale as the packet size increases because they are bottlenecked by the packet processing rate.

All networks scale better under UDP than under TCP, though the actual throughput is much lower.

At a fixed packet rate, throughput should scale with packet size.
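The fixed-packet-rate argument can be sketched numerically: if the bottleneck is how many packets per second the kernel can process, not the link bandwidth, then throughput grows linearly with packet size (the packets-per-second ceiling below is hypothetical, not a measured value):

```python
# Throughput model under a per-CPU packet-processing-rate bottleneck.

def throughput_mbps(packets_per_sec, packet_bytes):
    """Throughput in Mbps when the sender is limited to packets_per_sec."""
    return packets_per_sec * packet_bytes * 8 / 1e6

PPS_LIMIT = 100_000  # hypothetical packet-processing ceiling
for size in (64, 256, 1024):
    print(size, throughput_mbps(PPS_LIMIT, size))
```

A network whose measured throughput does not scale this way with packet size (as observed for the overlays under TCP) is therefore not merely rate-limited: the per-packet processing cost itself is growing.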

Interplay between VM network virtualization and container networks

[Figure: TCP throughput (MBps) with 16-byte and 1024-byte packets, comparing containers on physical machines (PM) and containers in VMs on different PMs.]

• The two-layer virtualization induces additional degradation (container network overheads) on top of VM network virtualization overheads
• Overlay networks suffer the most degradation

Scalability

[Figure: MPI all-to-all latency (μs) vs. number of containers or processes (2–8), w/o container vs. w/ container; panels: containers on a single VM (bridge mode), containers on multiple VMs placed 1:N-1 (Docker overlay), and containers on multiple VMs placed N/2:N/2 (Docker overlay).]

• The docker0 bridge limits the scalability of the container network on a single node
• Communication over the overlay network is a major bottleneck for container networks across hosts

Container Network Startup Time

Single host | Launch time || Multiple hosts | Launch time
None        | 539.6 ms    || Host           | 497.4 ms
Bridge      | 663.1 ms    || NAT            | 674.5 ms
Container   | 239.4 ms    || Docker Overlay | 10,979.8 ms
Host        | 497.4 ms    || Weave          | 2,365.2 ms
/           | /           || Flannel        | 3,970.3 ms
/           | /           || Calico (IPIP)  | 11,373.1 ms
/           | /           || Calico (BGP)   | 11,335.2 ms

The startup time was measured as the time from when a container create command is issued until the container is responsive to a network ping request.
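The measurement just described can be sketched as a polling loop (the container launcher and readiness check are injected so the sketch runs without Docker; in the experiments the readiness check was a network ping):

```python
# Measure time from "create" until a container answers a readiness probe.
import time

def measure_startup(start_container, is_responsive, timeout=30.0, interval=0.01):
    """Return seconds from start_container() until is_responsive() is True."""
    t0 = time.monotonic()
    start_container()
    while time.monotonic() - t0 < timeout:
        if is_responsive():
            return time.monotonic() - t0
        time.sleep(interval)
    raise TimeoutError("container never became responsive")

# Usage with a stand-in "container" that becomes responsive after ~50 ms:
deadline = time.monotonic() + 0.05
elapsed = measure_startup(lambda: None, lambda: time.monotonic() >= deadline)
print(f"startup took {elapsed * 1000:.1f} ms")
```

Because the probe is a real network request, this definition charges the network setup time (bridge attachment, overlay join, route advertisement) to the container startup, which is exactly the cost the table above exposes.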

Serverless applications: Event → Function → Data / cloud service

• Network setup accounts for a large share of the Docker startup time; attaching to an existing network in the container mode requires the least setup time
• Overlay and routing-based networks require a much longer startup time

Insights & Takeaways

• Challenging to determine an appropriate network for containerized applications
  ✓ Performance vs. security vs. flexibility
  ✓ Small packets vs. large packets
  ✓ TCP vs. UDP

• Bridging is a major bottleneck
  ✓ The Linux bridge and OVS have similar issues
  ✓ Avoid bridge mode if containers do not need to access external networks

• Overlay networks are the most convenient but expensive
  ✓ The existing network stack is inefficient in handling packet encapsulation and decapsulation

• Optimizing container networks
  ✓ Streamlining the asynchronous operations in the network stack
  ✓ Making the network stack aware of overlay-network packets
  ✓ Coordinating VM-level network virtualization and container networks

Thank you! Questions?