Kubernetes Networking @AWS

Kubernetes networking in AWS


Page 1: Kubernetes networking in AWS

Kubernetes Networking @AWS

Page 2: Kubernetes networking in AWS

AWS Infrastructure constructs

Network Security

Services

Page 3: Kubernetes networking in AWS

AWS Infrastructure Components

[Diagram: a Region with two AZs inside a VPC – subnets containing instances with ENIs, security groups, a routing table, the VPC router, an Internet GW with an Elastic IP, and a Virtual Private GW reaching a corporate Customer GW across the Internet]

Regions – used to manage network latency and regulatory compliance per country. No data replication outside a region.

Availability Zones – at least two in a region. Designed for fault isolation. Connected to multiple ISPs and different power sources. Interconnected at LAN speed for communication within the same region.

VPC – spans all of a region's AZs. Used to create an isolated private cloud within AWS. IP ranges are allocated by the customer. Networking – interfaces, subnets, routing tables and gateways (Internet, NAT and VPN). Security – security groups.

Interface (ENI) – can carry a primary, secondary or Elastic IP. Security groups attach to it. Independent of the instance (although the primary interface cannot be detached from an instance).

Subnet – connects one or more ENIs; can talk to another subnet only through an L3 router. Can be associated with only one routing table. Cannot span AZs.

Routing table – decides where network traffic goes. May be associated with multiple subnets. Limited to 50 routes per table.

Page 4: Kubernetes networking in AWS

VPC Security Components

Security group – a virtual firewall for the instance, controlling inbound and outbound traffic.

• Applied on ENI (instance) only

• No deny rules

• Stateful – return traffic implicitly allowed

• All rules evaluated before decision

• Up to five per instance

Network ACL – a virtual packet filter at the subnet level

• Applied on subnets only

• Allows deny rules

• Stateless – return traffic must be specified explicitly

• First match wins – rules are evaluated in number order (see the CLI sketch below)
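To make the stateful vs. stateless distinction concrete, here is a minimal AWS CLI sketch; the group and ACL IDs are placeholders, not taken from the deck.

# Security group: one inbound rule is enough, return traffic is implicitly allowed.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 443 --cidr 0.0.0.0/0

# Network ACL: allow inbound 443 AND the outbound ephemeral-port return path (protocol 6 = TCP).
aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 \
    --ingress --rule-number 100 --protocol 6 \
    --port-range From=443,To=443 --cidr-block 0.0.0.0/0 --rule-action allow
aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 \
    --egress --rule-number 100 --protocol 6 \
    --port-range From=1024,To=65535 --cidr-block 0.0.0.0/0 --rule-action allow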

[Diagram: the same Region/VPC layout as before, with a Network ACL attached at the subnet boundary and security groups attached to the instances' ENIs]

Page 5: Kubernetes networking in AWS

Network segmentation

VPC isolation – the best way to separate customers (obviously) and different organizations without relying on security groups.

• AWS VPC – great even for internal zoning; no need for policies

• Security group – stateful and flexible; network-location agnostic

• Network ACL – good for additional subnet-level control

[Diagram: two isolated VPCs – "R&D" and "Production" – each with its own subnets, routing table, security groups, Network ACL and Virtual Private GW; the figure labels the three approaches as implicit network segmentation (VPC), explicit instance segmentation (security group) and explicit network segmentation (Network ACL)]

Page 6: Kubernetes networking in AWS

VPC integration with other AWS services

Elastic load balancing –

• Types – Classic and application

• Classic is always Internet exposed

• Application LB can be internal

• ELB always sends traffic to private IP backends

• Application ELB can send traffic to containers

[Diagram: the Region/VPC layout with an internal ELB in front of instances inside a subnet and an Internet-facing ("outer") ELB behind the Internet GW]

Page 7: Kubernetes networking in AWS

VPC integration with other AWS services

AWS Simple Storage Service (S3) –

• Open to the Internet

• Data never spans multiple regions unless explicitly transferred

• Data spans multiple AZs within the region

• Connected to the VPC via a special endpoint

• The endpoint is treated as an interface in the routing table

• Only subnets associated with the relevant routing table can use the endpoint (see the CLI sketch below)
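A minimal sketch of wiring S3 to a VPC, assuming hypothetical IDs and the us-east-1 S3 service name:

# Create a gateway endpoint for S3 and attach it to a routing table; the endpoint
# becomes a route target, so only subnets using that table reach S3 through it.
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids rtb-0123456789abcdef0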

[Diagram: the Region/VPC layout with an S3 endpoint attached to the routing table inside the VPC]

Page 8: Kubernetes networking in AWS

VPC integration with other AWS services

Lambda –

• A service that runs customer-provided code

• Runs in containers on AWS-owned compute resources

• Initiates traffic from an IP outside of the VPC

• A single Lambda function can access only one VPC

• Traffic from Lambda to endpoints outside the VPC must be explicitly allowed on the VPC (see the CLI sketch below)
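A hedged sketch of attaching a Lambda function to a VPC with the AWS CLI; the function name, subnet and security group IDs are placeholders:

# Give the function ENIs inside the VPC so it can reach private resources;
# the referenced security group then governs its traffic.
aws lambda update-function-configuration \
    --function-name my-function \
    --vpc-config SubnetIds=subnet-0123456789abcdef0,SecurityGroupIds=sg-0123456789abcdef0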

[Diagram: the Region/VPC layout with a Lambda-managed endpoint reaching instances inside the VPC]

Page 9: Kubernetes networking in AWS

Inter-region interface

Regions (and their VPCs) are completely isolated from one another; there is no native inter-region VPC connectivity.

• Complete isolation between regions

• Connectivity only through the Internet, over a VPN connection between the regions' gateways

[Diagram: two VPCs in different regions ("Amsterdam" and "US Virginia"), each with its own subnets, security groups, Network ACL and Internet GW, connected to each other only across the Internet]

Page 10: Kubernetes networking in AWS

Networking the containerized environment

Major Concepts

Page 11: Kubernetes networking in AWS

Containerized applications networking

What are we looking for?

• Service discovery – automated reachability knowledge sharing between networking components

• Deployment – standard and simple, with no heavy involvement from network experts

• Data plane – direct access (no port mapping), fast and reliable

• Traffic type agnostic – multicast, IPv6

• Network features - NAT, IPAM, QoS

• Security features - micro-segmentation, access-control, encryption…

• Public cloud ready – multi-VPC and multi-AZ support; overcomes route-table limits and costs

• Public cloud agnostic – as little dependency on the provider's services as possible

Page 12: Kubernetes networking in AWS

Three concepts around

• Overlay – a virtual network decoupled from the underlying physical network using a tunnel (most commonly VXLAN)

• Underlay – attaching to the physical node's network interfaces

• Native L3 routing – advertising container/pod networks to the network via L3 routing; no overlay

Page 13: Kubernetes networking in AWS

Overlay-only approach

Implementations – Flannel, Contiv, Weave, Nuage

Data Plane –

• Underlying network transparency

• Via kernel space – much less network latency

• Overhead – adds 50 bytes of headers to the original frame

• Traffic agnostic – passes direct L2 or routed L3 traffic between two isolated segments (IPv4/IPv6/multicast)

Control plane –

• Service and network discovery – key/value store (etcd, Consul …)

• VNI field - identifies layer 2 networks allowing isolation between them. Routing between two separate L3 networks – via an external router (VXLAN aware)

• VTEPs (VXLAN tunnel endpoints) – the two virtual interfaces terminating the tunnel, typically the instances' vNICs (see the sketch below)
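For reference, a minimal sketch of creating a VTEP by hand with iproute2; interface names, the VNI and the peer address are illustrative – the plugins above do this automatically.

ip link add vxlan42 type vxlan id 42 dev eth0 dstport 4789    # VNI 42, VXLAN UDP port 4789
bridge fdb append 00:00:00:00:00:00 dev vxlan42 dst 10.15.0.3 # flood unknown MACs to the remote VTEP
ip addr add 10.1.0.1/24 dev vxlan42
ip link set vxlan42 up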

Page 14: Kubernetes networking in AWS

Underlay – MACVLAN

• Attaches an L2 network to the node's physical interface by creating a sub-interface

• Each sub-interface uses a different MAC address

• A pod attached to this network is directly exposed to the underlying network without port mapping or overlay

• Bridge mode – most commonly used; allows pods, containers or VMs on the same host to interconnect internally – traffic doesn't leave the host

• On AWS (see the sketch below):

  • Disable the IP src/dst check

  • Enable promiscuous mode on the parent NIC

  • Verify the MAC-addresses-per-NIC limitation
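A hedged sketch of the MACVLAN setup plus the AWS tweak mentioned above; the namespace, interface names and instance ID are hypothetical.

# MACVLAN sub-interface in bridge mode handed to a pod's network namespace.
ip netns add mypod
ip link add macvlan0 link eth0 type macvlan mode bridge
ip link set macvlan0 netns mypod
ip netns exec mypod ip addr add 172.16.0.10/24 dev macvlan0
ip netns exec mypod ip link set macvlan0 up

# AWS: let the instance send/receive traffic for MACs/IPs that are not its own.
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --no-source-dest-check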

[Diagram: a pod with two containers on an internal bridge, connected through a MACVLAN sub-interface (eth0.45) on the host's eth0 to the external network]

Page 15: Kubernetes networking in AWS

Native L3-only approach

Implementations – Calico, Romana

Data Plane –

• No overlays – direct container-to-container (or pod) communication using their real IP addresses, leveraging routing decisions made by the container hosts and network routers (the AWS route table – see the CLI sketch after this list)

Control plane –

• Containers/Pods/Services IPs being published to the network using routing protocol such as BGP

• Optional BGP peering – between containers nodes for inter-container communications and/or with upstream router for external access

• Large scale – route-reflector implementation may be used

• Due to the L3 nature, native IPv6 is supported

• Optionally NAT is supported for outgoing traffic
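A minimal sketch of what an L3 plugin effectively does on AWS – pointing the VPC route table at the node that owns a pod CIDR; the IDs are placeholders:

# Route the pod CIDR 10.1.1.0/24 to the instance hosting those pods, so pod IPs
# are reachable across the VPC without any overlay.
aws ec2 create-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 10.1.1.0/24 \
    --instance-id i-0123456789abcdef0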

Page 16: Kubernetes networking in AWS

Networking models – Comparison

Category \ Model                | Overlay | L3 routing | Comments
Simple to deploy                | Yes     | No         | L3/BGP requires routing config
Widely used                     | Yes     | No         | VXLAN – supported by most plugins
Traffic type agnostic           | Yes     | Yes*       | *Driver-support dependent
Allows IP duplication           | Yes     | No         | L3 needs address management
Public cloud friendly           | Yes     | No         | L3 requires special config on AWS routing tables; HA across two different AZs' subnets still requires tunneling
Host local routing              | No      | Yes        | Inter-subnet routing on the same host goes out; external plugins overcome this with split routing
Underlying network independence | Yes     | No         | L3 needs BGP peering config for external communication
Performance                     | Yes*    | Yes*       | *Depends on the data path – user or kernel space
Network efficiency              | No      | Yes        | Overlay adds overhead

Page 17: Kubernetes networking in AWS

Common Implementation Concepts

• The majority of plugins – combine overlay (mostly VXLAN) and L3

• Subnet allocated per node (Nuage is an exception)

• Based on agent installed on the node (project proprietary or Open vSwitch)

• Local routing on the node between different subnets

• Support routing to other nodes (needs L2 networks between nodes)

• Public cloud integration is provided for routing-table updates (limited compared to the standard plugins)

• Performance - Data path in Kernel space

• Distributed or policy based (SDN)

Page 18: Kubernetes networking in AWS

Flannel (CoreOS) – the proprietary example

• Used for dual-OVS scenarios (OpenStack)

• The flanneld agent on the node allocates a subnet to the node and registers it in the etcd store installed on each node

• No Security policy currently supported - A new project, Canal, combines Flannel and Calico for a whole network and security solution

• Subnet cannot span multiple hosts

Three implementations:

• Overlay – UDP/VXLAN – etcd used for control plane

• host-gw – direct routing over an L2 network, with routes in the node's routing table – can also be used in AWS and performs faster

• aws-vpc – direct routing over the AWS VPC routing tables; flanneld dynamically updates the AWS routing table (50-route limit per routing table – if more are needed, VXLAN can be used). See the config sketch below.
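A hedged sketch of the flannel network configuration stored in etcd; the key path and backend names follow the flannel project's documentation, and the CIDR is illustrative.

# etcd v2 CLI: publish the cluster network and the chosen backend for flanneld to pick up.
etcdctl set /coreos.com/network/config \
  '{ "Network": "10.1.0.0/16", "Backend": { "Type": "aws-vpc" } }'
# Alternative backends: "vxlan" (overlay) or "host-gw" (direct routing over L2).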

Page 19: Kubernetes networking in AWS

Flannel Direct

[Diagram: a master and Node1 in an AWS VPC, each running flanneld and a docker0 Linux bridge (172.17.42.1 / 172.17.43.1) with a container namespace attached via a veth pair (172.17.42.2 / 172.17.43.2); node subnets do not overlap. Control plane: in aws-vpc mode flanneld updates the AWS route table, in host-gw mode flanneld writes static routes (e.g. via node1) into each host's route table. Data plane: direct routing, no overlay]

Page 20: Kubernetes networking in AWS

Flannel OVERLAY

[Diagram: KUB-MASTER and KUB-NODE, each running flanneld and etcd, with a docker0 Linux bridge (172.17.42.1) and a container namespace (172.17.42.2) attached via a veth pair; a VXLAN VTEP on each node carries the overlay data plane, while flanneld and etcd form the control plane]

Page 21: Kubernetes networking in AWS

OpenShift Networking over Kubernetes

OVS-based solution

Concepts

Implementation alternatives

Page 22: Kubernetes networking in AWS

[Diagram: a node running Docker with two pods (a PAUSE container plus an app container each), alongside etcd and DNS]

POD

• A group of one or more containers, mostly tightly related

• Mostly ephemeral in nature

• All containers in a pod run on the same physical node

• The PAUSE container maintains the pod's networking

KUBE-PROXY

• Assigns a listening port for a service

• Listens for connections targeted at services and forwards them to the backend pods

• Two modes – "userspace" and "iptables"

Kubelet

• Watches for pods scheduled to the node and mounts the required volumes

• Manages the containers via Docker

• Monitors the pods' status and reports back to the rest of the system

Replication Controller

• Ensures that a specified number of pod "replicas" are running at any time

• Creates and destroys pods dynamically

DNS

• Maintains a DNS server for the cluster's services

etcd

• Key/value store for the API server; all cluster data is stored here

• Access is allowed only to the API server

API Server

• The front end of the Kubernetes control plane; designed to scale horizontally

Page 23: Kubernetes networking in AWS

So why not Docker's default networking?

• Non-networking reason – driver integration issues and low-level built-in drivers (at least initially)

• Scalability (horizontality) – Docker's approach of assigning IPs directly to containers limits scalability in production environments with thousands of containers; the containers' network footprint should be abstracted

• Complexity – Docker's port mapping/NAT requires fiddling with configuration, IP address management and coordination of applications' external ports

• Node resource and performance limitations – Docker's port mapping may hit port-resource limits, and it requires extra processing on the node

• The CNI model was preferred over CNM because of the container access limitation

Page 24: Kubernetes networking in AWS

Kubernetes native networking

• IP address allocation – IPs are given to pods rather than to containers

• Intra-pod containers share the same IP

• Intra-pod containers use localhost to inter-communicate

• Requires direct multi-host networking without NAT/Port mapping

• Kubernetes doesn't natively provide a solution for multi-host networking; it relies on third-party plugins: Flannel, Weave, Calico, Nuage, OVS, etc.

• Flannel was already discussed as an example of the overlay networking approach

• OVS will be discussed later as the basis of the OVS-based networking plugins

• The Nuage solution will be discussed separately

Page 25: Kubernetes networking in AWS

Kubernetes – Pod

When a pod is created with containers, the following happens:

• “PAUSE” container created –

• “pod infrastructure” container – minimal config

• Handles the networking by holding the networking namespace, ports and IP address for the containers on that pod

• The one that actually listens to the application requests

• When traffic hits, it’s redirected by IPTABLES to the container that listens to this port

• “User defined” containers created –

• Each use “mapped container” mode to be linked to the PAUSE container

• Share the PAUSE’s IP address

apiVersion: v1
kind: Pod
metadata:
  labels:
    deployment: docker-registry-1
    deploymentconfig: docker-registry
    docker-registry: default
  generateName: docker-registry-1-
spec:
  containers:
  - env:
    - name: OPENSHIFT_CA_DATA
      value: ...
    - name: OPENSHIFT_MASTER
      value: https://master.example.com:8443
    ports:
    - containerPort: 5000
      protocol: TCP
    resources: {}
    securityContext: { ... }
  dnsPolicy: ClusterFirst

Page 26: Kubernetes networking in AWS

Kubernetes – Service

• An abstraction which defines a logical set of pods and a policy by which to access them

• Mostly permanent in nature

• Holds a virtual IP/ports used for client requests (internal or external)

• Updated whenever the set of pods changes

• Use labels and selector for choosing the backend pods to forward the traffic to

When a service is created the following happens:

• IP address assigned by the IPAM service

• The kube-proxy service on the worker node, assigns a port to the new service

• Kube-proxy generates iptables rules for forwarding the connection to the backend pods

• Two Kube-proxy modes….

apiVersion: v1
kind: Service
metadata:
  name: docker-registry
spec:
  selector:
    docker-registry: default
  portalIP: 172.30.136.123
  ports:
  - nodePort: 0
    port: 5000
    protocol: TCP
    targetPort: 5000

Page 27: Kubernetes networking in AWS

Kube-Proxy Modes

Kube-proxy always writes the iptables rules, but what actually handles the connection?

Userspace mode – kube-proxy itself forwards connections to the backend pods. Packets move between user and kernel space, which adds latency, but the proxy keeps trying until it finds a listening backend pod, and debugging is easier.

Iptables mode – iptables rules in the kernel forward the connection directly to the pod. Fast and efficient, but harder to debug and there is no retry mechanism (see the sketch below).
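A simplified, hedged stand-in for the kind of NAT rule kube-proxy programs in iptables mode; the service and pod IPs reuse the earlier example, and the real rules live in dedicated KUBE-* chains with statistic-based load balancing across all backends.

# Traffic to the service VIP:port is DNATed straight to a backend pod, entirely in kernel space.
iptables -t nat -A PREROUTING -d 172.30.136.123/32 -p tcp --dport 5000 \
    -j DNAT --to-destination 10.1.0.2:5000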

[Diagram: in userspace mode the connection flows service → iptables → kube-proxy → pod, with kube-proxy writing the iptables rules from the service definition; in iptables mode the connection flows service → iptables → pod directly]

Page 28: Kubernetes networking in AWS

OpenShift SDN – Open vSwitch (OVS) – the foundation

• A multi-layer open-source virtual switch

• Doesn't support native L3 routing – needs the Linux kernel or an external component

• Allows network automation through various programmatic interfaces as well as built-in CLI tools

• Supports:

  • Port mirroring

• LACP port channeling

• Standard 802.1Q VLAN trunking

• IGMP v1/2/3

• Spanning Tree Protocol (STP) and RSTP

• QoS control for different applications, users, or data flows

• port level traffic policing

• NIC bonding with source-MAC load balancing, active backup, and layer-4 hashing

• OpenFlow

• Full IPv6 support

• kernel space forwarding

• GRE, VXLAN and other tunneling protocols with additional support for outer IPsec

Page 29: Kubernetes networking in AWS

Linux Network Namespaces

• Logically another copy of the network stack, with its own routes, firewall rules and network devices

• Initially all the processes share the same default network namespace from the parent host (init process)

• A pod is created with a “host container” which gets its own network namespace and maintains it

• "User containers" within that pod join that namespace (see the sketch below)

[Diagram: two pods, each with a PAUSE container and two app containers sharing localhost, attached to the br0 OVS bridge via veth/eth0 pairs; the host's own eth0 lives in the default namespace]
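A minimal sketch of what joining a network namespace means at the Linux level; names and addresses are hypothetical.

ip netns add demo
ip link add veth-host type veth peer name veth-demo
ip link set veth-demo netns demo                      # one end stays on the host, one end moves in
ip addr add 10.1.0.1/24 dev veth-host && ip link set veth-host up
ip netns exec demo ip addr add 10.1.0.2/24 dev veth-demo
ip netns exec demo ip link set veth-demo up
ip netns exec demo ip link set lo up
ip netns exec demo ping -c1 10.1.0.1                  # the namespace only sees its own stack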

Page 30: Kubernetes networking in AWS

OpenShift SDN – OVS Management

[Diagram: containers attach via vETH to the OVS bridge, with tun0 leading out; the OVS daemon runs in user space while the forwarding datapath sits in kernel space]

• OVSDB – configures and monitors the OVS itself (bridges, ports, …)

• ovs-vsctl – configures and monitors the ovsdb: bridges, ports, flows

• OpenFlow – programs the OVS daemon with flow entries for flow-based forwarding

• ovs-vswitchd (the actual OVS daemon) – processes OpenFlow messages, manages the datapath (which actually lives in kernel space) and maintains two flow tables (exact-flow and wildcard-flow)

• ovs-appctl – sends commands to the OVS daemon (example: MAC tables)

• ovs-dpctl – configures and monitors the OVS kernel module

(See the CLI sketch below.)
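A hedged sketch of the corresponding CLI surface; bridge, port and peer values are illustrative.

ovs-vsctl add-br br0                               # OVSDB: create the bridge
ovs-vsctl add-port br0 vxlan0 -- \
  set interface vxlan0 type=vxlan options:remote_ip=10.15.0.3
ovs-ofctl dump-flows br0                           # OpenFlow: inspect the flow entries
ovs-appctl fdb/show br0                            # daemon: dump the learned MAC table
ovs-dpctl show                                     # kernel datapath status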

Page 31: Kubernetes networking in AWS

OpenShift SDN – L3 routing

1. OVS doesn't support native L3 routing

2. L3 routing between two subnets is done by the parent host's networking stack

3. Steps (one alternative):

   1. Create two per-VLAN OVS bridges

   2. Create two L3 sub-interfaces (e.g. eth0.10, eth0.20) on the parent host

   3. Bridge the two sub-interfaces to the two OVS bridges

   4. Activate IP forwarding on the parent host (see the sketch below)

Note: The L3 routing can be done using plugins such as Flannel, Weave and others
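A hedged sketch of the host-side pieces of the alternative above; interface names, VLAN IDs and addresses are illustrative.

ip link add link eth0 name eth0.10 type vlan id 10
ip link add link eth0 name eth0.20 type vlan id 20
ip addr add 10.1.10.1/24 dev eth0.10 && ip link set eth0.10 up
ip addr add 10.1.20.1/24 dev eth0.20 && ip link set eth0.20 up
ovs-vsctl add-br br-vlan10 && ovs-vsctl add-port br-vlan10 eth0.10
ovs-vsctl add-br br-vlan20 && ovs-vsctl add-port br-vlan20 eth0.20
sysctl -w net.ipv4.ip_forward=1    # let the parent host route between the two subnets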

Page 32: Kubernetes networking in AWS


OpenShift SDN – local bridges and interfaces on the host

1. Node is registered and given a subnet

2. A pod created by OpenShift is given an IP from the Docker bridge

3. …then moved to the OVS bridge

4. A container created directly by the Docker engine is given an IP from the Docker bridge

5. …and stays connected to lbr0

6. No network duplication – the Docker bridge (lbr0) is used only for IPAM

[Diagram: lbr0 (the Docker bridge, 10.1.0.1 GW, used for IPAM) and br0 (OVS, with vxlan0, tun0 and the vovsbr/vlinuxbr ports); an OpenShift-scheduled pod (PAUSE + containers, 10.1.0.2) attaches to br0 via a veth pair, while a Docker-scheduled container (10.1.0.3) stays on lbr0]

Page 33: Kubernetes networking in AWS

OpenShift SDN – Overlay

• Control plane – etcd stores information related to host subnets

• Initiated from the node's OVS via the node's NIC (VTEP – lbr0)

• Traffic is encapsulated into OVS's VXLAN interface

• When the ovs-multitenant driver is used, projects can be identified by VNIDs

• Adds 50 bytes to the original frame

[Diagram: VXLAN encapsulation over UDP/4789 – outer Ethernet 14 bytes, outer IP 20 bytes, outer UDP 8 bytes and VXLAN header 8 bytes; Node1 (10.1.0.0/24) and Node2 (10.1.1.0/24), each with BR0 (OVS) as the VTEP and DMZ/inner pods, tunnel between node addresses 10.15.0.1 and 10.15.0.2, with etcd on the master (10.15.0.3)]

Page 34: Kubernetes networking in AWS

OpenShift SDN – plugin option 1

OVS-Subnet – the original driver

• Creates a flat network that allows all pods to inter-communicate

• No network segmentation

• Policy applied on the OVS

• No significance to project membership

[Diagram: Node1 (10.1.0.0/24) and Node2 (10.1.1.0/24), each with a BR0 (OVS) VTEP and DMZ/inner pods, connected by VXLAN between eth0 10.15.0.2 and 10.15.0.3]

Page 35: Kubernetes networking in AWS

OpenShift SDN – plugin option 2

OVS-Multitenant –

• Each project gets a unique VNID, which identifies the pods in that project

• The default project – VNID 0 – can communicate with all others (shared services)

• Pods' traffic is inspected according to their project membership

[Diagram: Node1 (10.1.0.0/24) and Node2 (10.1.1.0/24) with BR0 (OVS) VTEPs connected by VXLAN; on each node, Project A pods carry VNID 221 and Project B pods carry VNID 321]

Page 36: Kubernetes networking in AWS

OpenShift – service discovery – alternatives

App to app – preferably Pod-to-Service; avoid Pod-to-Pod

Environment variables –

• Injected to the pod with connectivity info (user names, service IP..)

• For updates, pod recreation is needed

• The destination service must be created before the pod (otherwise the pod must be recreated to pick up the variables)

• Not a real dynamic discovery…

DNS – SkyDNS – serves the <cluster>.local suffix

• Split DNS - supports different resolution for internal and external

• SkyDNS installed on master and pods are configured by default to use it first

• Dynamic - no need to recreate the pods for any service update

• Newly created services being detected automatically by the DNS

• For direct pod-to-pod connections (no service), DNS round robin can be used (see the sketch below)
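A small sketch of both discovery paths from inside a pod, assuming a hypothetical docker-registry service in the default namespace and the common cluster.local suffix:

# DNS path – dynamic, survives service updates.
nslookup docker-registry.default.svc.cluster.local

# Environment-variable path – only works if the service existed before the pod was created.
echo $DOCKER_REGISTRY_SERVICE_HOST $DOCKER_REGISTRY_SERVICE_PORT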

Page 37: Kubernetes networking in AWS


OpenShift – Internal customer services consumption

Yum repos, Docker registry, SMTP, LDAP

Leveraging the VPN tunnels from AWS to the customer DMZ

1. The node connects to the requested service proxy in the customer DMZ

2. The proxy initiates the request for the service, sourced from its own IP – allowed by the customer firewall

[Diagram: an OpenShift node (10.15.0.1) reaches a service proxy in the customer DMZ over the Internet via the Virtual Private GW and Customer GW; the proxy forwards to the Docker registry, repos, LDAP and SMTP in the customer LAN]

Page 38: Kubernetes networking in AWS

Routing and Load Balancing

Requirements

Network discovery

Alternatives

Page 39: Kubernetes networking in AWS

Routing – alternative 1

OpenShift router –

• For WEB apps – http/https

• Managed by users

• Routes created in project level and added to the router

• Unless shared, all routers see all routes

• For traffic to come in, the admin needs to add a DNS record for the router or use a wildcard record

• Default – an HAProxy container that listens on the host's IP and proxies traffic to the pods (see the sketch below)
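A hedged sketch of publishing a service through the router with the oc client; the service name and hostname are illustrative.

oc expose service web-srv1 --hostname=service.aws.com   # create a route for the service
oc get routes                                           # DNS (or a wildcard) must still point at the router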

[Diagram: DNS resolves https://service.aws.com:8080 to Node1's IP (10.15.0.1), where the HAProxy router forwards to services web-srv1:80 and web-srv2:80 on the 10.1.0.0/16 cluster network spread across the master and nodes]

Page 40: Kubernetes networking in AWS

Routing – alternative 2

Standalone Load Balancer –

• All traffic types

• Alternatives are:

  1. AWS ELB

  2. Dedicated cluster node

  3. Customized HAProxy pod

• IP Routing towards the internal cluster network – discussed later

[Diagram: the three alternatives in front of the 10.1.0.0/16 cluster – (1) an AWS ELB, (2) a dedicated HAProxy node (Node3) and (3) an HAProxy pod inside the cluster – each resolving service.aws.com:80 and forwarding to services web-srv1:80 / web-srv2:80]

Page 41: Kubernetes networking in AWS

Routing – alternative 3

Service external IP –

• Managed by Kubernetes Kube-Proxy service on each node

• The proxy assigns the IP/port and listens to incoming connections

• Redirects traffic to the pods

• All types of traffic

• The admin should take care of routing traffic towards the node

  • iptables-based – all pods should be ready to listen

  • Userspace – tries all pods until it finds one that answers

[Diagram: DNS resolves service.aws.com:80 to a node's IP; the kube-proxy service on that node redirects the traffic to pods web-srv1:80 and web-srv2:80 on the 10.1.0.0/16 cluster network]

Page 42: Kubernetes networking in AWS

OpenShift@AWS – LB routing to the cluster network

Concern – network routing towards the cluster network

Option 1 – AWS ELB

1. Forwards to the OpenShift node's IP using port mapping

2. Requires application port coordination – manageability issues

3. Excessive iptables port-mapping manipulation – error-prone

4. Dependency on AWS services

[Diagram: an ELB receives https://service.aws.com:8080 and forwards to port 8080 on the nodes' IPs; iptables port mapping on each node then redirects to services web-srv1:80 / web-srv2:80 on the 10.1.0.0/16 cluster network]

Page 43: Kubernetes networking in AWS


OpenShift@AWS – LB routing to the cluster network

Concern – network routing towards the cluster network

Option 2 – Tunneling

1. Tunnel the external HAProxy node to the cluster via a ramp node

2. Requires extra configuration – complexity

3. Extra tunneling – performance issues

4. You need this instance to be continuously up – costly

5. AWS independence

[Diagram: an external HAProxy (192.168.0.1/30) receives https://service.aws.com:8080 and tunnels to a ramp node (192.168.0.2/30) that holds a route to 10.1.0.0/16, reaching services web-srv1:80 / web-srv2:80 in the cluster]

Page 44: Kubernetes networking in AWS

OpenShift@AWS – LB routing to the cluster network

Concern – network routing towards the cluster network

Option 3 – Haproxy move to cluster

1. Put the LB in a LB-only cluster node - disable scheduling

2. Service URL resolved to the node’s IP

3. Full routing knowledge of the cluster

4. Simple and fast – native routing

5. AWS independency

6. You need this instance to be continuously up – costly

[Diagram: a dedicated HAProxy node inside the cluster receives https://service.aws.com:8080 on its own IP and routes natively to services web-srv1:80 / web-srv2:80 on 10.1.0.0/16]

Page 45: Kubernetes networking in AWS

OpenShift@AWS – LB routing to the cluster network

Concern – network routing towards the cluster network

Option 4 – Haproxy container

1. Create Haproxy container

2. Service URL resolved to the container’s IP

3. Full routing knowledge of the cluster

4. AWS independency

5. Use cluster overlay network – native

6. Overlay network - being used anyway

[Diagram: an HAProxy pod (eth0 10.1.0.20) on Node1 receives https://service.aws.com:8080 and forwards over the cluster overlay to services web-srv1:80 / web-srv2:80 on 10.1.0.0/16]

Page 46: Kubernetes networking in AWS

Network Security

Requirements

Alternatives

Networking solutions capabilities

Page 47: Kubernetes networking in AWS

Shared Security Responsibility Model

AWS-level resource access

• Creating VPCs, instances, network services, storage services, …

• Requires AWS AAA

• Managed only by AWS

OS-level access

• SSH or RDP into the instance's OS

• Requires OS-level AAA or certificates

• Managed by the customer

ELB, Lambda and application-related services are optional and are not considered part of the shared trust model.

[Diagram: the customer/AWS responsibility boundary]

Page 48: Kubernetes networking in AWS

Intra-pod micro-segmentation

• For some reason, someone has put containers with different sensitivity levels in the same pod

• OVS uses IP/MAC/OVS port for policy (pod-level attributes only)

• No security policy or network segmentation is applied to intra-pod containers

• Limiting connections or blocking TCP ports are tweaks that won't help against a newly discovered exploit

• “Security Contexts” feature doesn’t apply to intra-pod security but to the pod level

• It should be presumed that containers in a pod share the same security level!

[Diagram: a pod whose publicly exposed DMZ container (10.10.10.1) is compromised; the attacker reaches the co-located DRMS container (10.10.10.2) over localhost, because the SDN controller and br-eth policies only see pod-level attributes]

From github’s pod-security-context project page:“We will not design for intra-pod security; we are not currently concerned about isolating containers in the same pod from one another”

Page 49: Kubernetes networking in AWS

OpenShift SDN – network segmentation

Three options:

1. Separate clusters – DMZ and SECURE – different networks – implicit network segmentation – expensive, but simple in the short term

2. Separate nodes in the same cluster – DMZ nodes and SECURE nodes – access control applied with security groups – communication is freely allowed across the cluster, so this doesn't give real segmentation with ovs-subnet

3. OpenShift's ovs-multitenant driver – gives micro-segmentation using the projects' VNIDs

Page 50: Kubernetes networking in AWS

Option 1 - Cluster level segmentation

[Diagram: a "secure" cluster and an "exposed" (DMZ) cluster, each 10.1.0.0/16 with its own master/etcd and nodes on the VPC network; a security group allows only a specific port (33111) between them, and only the exposed cluster faces the Internet]

No shared service-discovery knowledge and no network visibility of addresses and ports – lots of barriers and disinformation about the other cluster's services

Page 51: Kubernetes networking in AWS

Option 2 – Node-segregated, same cluster

1. The exposed app1 has been compromised

2. Cluster service discovery may be queried for various cluster networking knowledge – IPs, ports, services, pods

3. With visibility of the other pods and services, port scanning can be invoked and exploited

4. Other sensitive apps might be harmed

The cluster itself provides the knowledge, freedom and tools for further hacking actions

[Diagram: one cluster (10.1.0.0/16) where Node1 is a DMZ node running the compromised app1 (10.1.0.1) and internal nodes run app2/app3; the master's service discovery exposes full cluster knowledge to any pod, despite the security group that only allows port 33111 from the Internet]

Page 52: Kubernetes networking in AWS

OpenShift SDN – network segmentation

OVS-Subnet plugin

• All projects are labeled with VNID 0, so they are allowed to communicate with all other pods and services

• No network segmentation

• Other filtering mechanisms are required: OVS flows, iptables, micro-segmentation solutions

[Diagram: Node1 (10.15.0.2) and Node2 (10.15.0.3) connected by VXLAN over the VPC network; br0 allows traffic between the DMZ and inner pods/services because everything shares the flat subnet]

Page 53: Kubernetes networking in AWS


OpenShift SDN – network segmentation

ovs-multitenant SDN plugin

The OpenShift default project – VNID 0 – allows access to/from all

All non-default projects are given a unique VNID unless they are explicitly joined together

Pods get their network association according to their project membership

A pod can access another pod or service only if they share the same VNID; otherwise OVS blocks the traffic (see the sketch below)
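A hedged sketch of managing project VNIDs with the oc client, as described in the OpenShift 3.x ovs-multitenant documentation; project names are illustrative.

oc adm pod-network join-projects --to=project-a project-b   # make the projects share one VNID
oc adm pod-network isolate-projects project-b               # give it back its own VNID
oc adm pod-network make-projects-global shared-services     # VNID 0, reachable by all projects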

[Diagram: Node1 (10.15.0.2, 10.1.0.0/24) and Node2 (10.15.0.3, 10.1.1.0/24) with BR0 (OVS) VTEPs joined by VXLAN; each node hosts a DMZ pod and an inner pod belonging to projects with VNIDs 221 and 321; etcd on the master keeps the project-to-VNID mapping and the network namespace data]

Page 54: Kubernetes networking in AWS


Controlled isolation - Egress Router

• A privileged pod

• Redirects pods' traffic to a specified external server, while allowing connections only from specific sources

• Can be called via a K8S service

• Forwards the traffic outside using its own private IP, which is then NATed by the node

• Steps:

  • Create a MACVLAN interface on the node's primary interface

  • Move this interface into the egress router pod's namespace

[Diagram: an egress router pod on Node2 (10.15.0.3) owns the MACVLAN sub-interface eth0.30 (10.15.0.20); traffic from the DMZ pod (SRC 10.1.0.2, DST 234.123.23.2) leaves re-sourced as 10.15.0.20 towards the external server, crossing the VPC network to the customer DMZ/Internet; etcd keeps the project-to-VNID mapping]

Page 55: Kubernetes networking in AWS


Controlled isolation – Gateway pod

• Created at the project level

• Can be applied only to isolated pods (not default or joined ones)

• Can be used to open specific rules to an isolated app

• If a pod needs access to a specific service belonging to a different project, you may add an EgressNetworkPolicy to the source pod's project (see the sketch below)
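A hedged sketch of such an EgressNetworkPolicy, with a hypothetical project name and CIDRs; the field names follow the OpenShift 3.x documentation and the apiVersion may differ by release.

oc create -f - <<'EOF'
apiVersion: v1
kind: EgressNetworkPolicy
metadata:
  name: default
  namespace: project-b
spec:
  egress:
  - type: Allow           # allow only the one external server
    to:
      cidrSelector: 234.123.23.2/32
  - type: Deny            # deny everything else leaving the project
    to:
      cidrSelector: 0.0.0.0/0
EOF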

[Diagram: an HAProxy/firewall "gateway" pod with VNID 0 on Node2 (eth0.30, 10.15.0.20) fronts the isolated projects (VNIDs 221 and 321); traffic from the DMZ pod (SRC 10.1.0.2) towards the external server (234.123.23.2) is re-sourced as 10.15.0.20 before leaving to the VPC network/Internet]

Page 56: Kubernetes networking in AWS


OpenShift SDN – L3 segmentation use case

1. Secure and DMZ subnets

2. Pods scheduled to multiple hosts and connected to subnets according to their sensitivity level

3. Another layer of segmentation

4. A more "cloudy" method, since every node can equally be scheduled with all types of pods

5. Currently doesn’t seem to be natively supported

6. Nuage plugin supports this

[Diagram: an Internet-facing LB in front of the cluster (10.1.0.0/16); each node carries both a secure subnet (10.1.0.0/24) and a DMZ subnet (10.1.1.0/24), with secure pods behind the Secure Service and DMZ pods behind the DMZ Service]

Page 57: Kubernetes networking in AWS

AWS security groups inspection

Concern – users may attach overly permissive security groups to instances

Q - Security group definition – manual or automatic?

A - Proactive way –

• Wrapping security checks into continuous integration

• Using subnet-level Network ACLs for more general deny rules – managed only by security admins

• Using third-party tools: Dome9, …

A - Reactive way –

Using tools such as aws_recipes and Scout2 (NCC Group) to inspect the existing configuration (see the sketch below)
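A quick reactive check is possible with the plain AWS CLI as well – for example, listing security groups that allow ingress from anywhere:

aws ec2 describe-security-groups \
    --filters Name=ip-permission.cidr,Values=0.0.0.0/0 \
    --query 'SecurityGroups[].{ID:GroupId,Name:GroupName}' --output table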

Lots to be discussed

[Diagram: the Region/VPC layout again, annotated to show which parts are under admin control (the Network ACL at the subnet) and which are under user control (the security groups on the instances)]

Page 58: Kubernetes networking in AWS

Questions