What's new in Kubernetes


Google Cloud Platform

What’s new in Kubernetes
Docker & Bay Area OpenSource meetup

February 16, 2016

Daniel Smith <dbsmith@google.com>, Senior Software Engineer


Kubernetes

Greek for “Helmsman”; also the root of the words “governor” and “cybernetic”

• Runs and manages containers

• Inspired and informed by Google’s experiences and internal systems

• Supports multiple cloud and bare-metal environments

• Supports multiple container runtimes

• 100% Open source, written in Go

Manage applications, not machines


Google has been developing and using containers to manage applications for over 10 years.

Images by Connie Zhou


Review: What’s old in Kubernetes?


The 10,000 foot view

Diagram: users reach the cluster through the UI, the CLI, and the API; the master runs the apiserver, scheduler, controllers, and etcd; each node runs a kubelet.


Pods


Pods

Small group of containers & volumes

Tightly coupled

The atom of scheduling & placement

Shared namespace
• share IP address & localhost
• share IPC, etc.

Managed lifecycle
• bound to a node, restart in place
• can die, cannot be reborn with same ID

Example: data puller & web server

Diagram: one Pod holds a File Puller container (fed by a Content Manager) and a Web Server container; both share a Volume, and Consumers talk to the Web Server.
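A minimal manifest for this pattern might look like the sketch below; the image names and paths are hypothetical placeholders, not anything from the talk:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: content-server
spec:
  volumes:
  - name: content            # shared scratch space for both containers
    emptyDir: {}
  containers:
  - name: file-puller        # hypothetical image that fetches content
    image: example/file-puller
    volumeMounts:
    - name: content
      mountPath: /data
  - name: web-server         # serves whatever the puller fetched
    image: nginx
    volumeMounts:
    - name: content
      mountPath: /usr/share/nginx/html
```

Both containers share one IP and one volume, so the web server sees the puller's files immediately.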


Volumes

Very similar to Docker’s concept

Pod scoped storage

Share the pod’s lifetime & fate

Support many types of volume plugins:
• Empty dir (and tmpfs)
• Host path
• Git repository
• GCE Persistent Disk
• AWS Elastic Block Store
• Azure File Storage
• iSCSI
• Flocker
• NFS
• GlusterFS
• Ceph File and RBD
• Cinder
• FibreChannel
• Secret, ConfigMap, DownwardAPI
• Flex (exec a binary)
• ...
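As an illustration of declaring one of these plugin types, a pod might mount a GCE Persistent Disk like this; the pod name, disk name, and mount path are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pd-example
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /var/lib/data   # where the volume appears in the container
  volumes:
  - name: data
    gcePersistentDisk:           # one of the plugin types listed above
      pdName: my-disk            # a pre-created GCE PD (hypothetical)
      fsType: ext4
```

The volume shares the pod's lifetime: it is attached when the pod is scheduled and detached when the pod dies.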


ReplicationControllers


ReplicationControllers

A simple control loop

Runs out-of-process wrt API server

Has 1 job: ensure N copies of a pod
• if too few, start some
• if too many, kill some
• grouped by a selector

Cleanly layered on top of the core
• all access is by public APIs

Replicated pods are fungible• No implied order or identity

ReplicationController
- name = “my-rc”
- selector = {“App”: “MyApp”}
- podTemplate = { ... }
- replicas = 4

Control loop against the API Server: “How many?” → 3 → “Start 1 more” → OK → “How many?” → 4
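The “my-rc” sketch on the slide corresponds roughly to a manifest like this; the container name and image are hypothetical fillers for the elided podTemplate:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-rc
spec:
  replicas: 4
  selector:
    App: MyApp              # pods are grouped by this selector
  template:                 # the podTemplate from the slide
    metadata:
      labels:
        App: MyApp          # must match the selector
    spec:
      containers:
      - name: my-app
        image: example/my-app   # hypothetical image
```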


Services


Services

A group of pods that work together
• grouped by a selector

Defines access policy
• “load balanced” or “headless”

Gets a stable virtual IP and port
• sometimes called the service portal
• also a DNS name

VIP is managed by kube-proxy
• watches all services
• updates iptables when backends change

Hides complexity - ideal for non-native apps

Diagram: Client → Virtual IP → backend pods
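A minimal Service selecting the pods above might look like this sketch; the name and ports are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    App: MyApp        # groups the backend pods by label
  ports:
  - port: 80          # stable port on the service VIP
    targetPort: 8080  # port the pods actually listen on (hypothetical)
```

Clients talk to the VIP:80; kube-proxy keeps the iptables rules pointing at whatever pods currently match the selector.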


External Services

Service IPs are only available inside the cluster

Need to receive traffic from “the outside world”

Built-in: Service “type”
• NodePort: expose on a port on every node
• LoadBalancer: provision a cloud load-balancer

DIY load-balancer solutions
• socat (for nodePort remapping)
• haproxy
• nginx
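A NodePort variant of the Service above could be sketched like this; the names and port numbers are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-frontend
spec:
  type: NodePort      # or LoadBalancer on a supported cloud
  selector:
    App: MyApp
  ports:
  - port: 80
    nodePort: 30080   # exposed on this port on every node
```

Outside traffic can then hit any node's IP on port 30080, and iptables forwards it to a backend pod.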


What’s new in Kubernetes?


Ingress (L7)

Services are assumed L3/L4

Lots of apps want HTTP/HTTPS

Ingress maps incoming traffic to backend services

• by HTTP host headers
• by HTTP URL paths

HAProxy, NGINX, AWS and GCE implementations in progress

Now with SSL!

Status: BETA in Kubernetes v1.2

Diagram: a Client hits api.company.com (24.7.8.9); the URL Map routes http://api.company.com/foo to Service-foo (10.0.0.1) and http://api.company.com/bar to Service-bar (10.0.0.2).

Ingress API: Ingress (L7)

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
  - host: k8s.io
    http:
      paths:
      - path: /foo
        backend:
          serviceName: fooSvc
          servicePort: 80
      - path: /bar
        backend:
          serviceName: barSvc
          servicePort: 80

Routing: http://k8s.io/foo → fooSvc, http://k8s.io/bar → barSvc

Ingress (L7)

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
  - host: asdf.io
    http:
      paths:
      - backend:
          serviceName: qwertySvc
          servicePort: 80
  - host: aoeu.io
    http:
      paths:
      - backend:
          serviceName: dvorakSvc
          servicePort: 80

Routing: http://asdf.io/* → qwertySvc, http://aoeu.io/* → dvorakSvc

Ingress (L7)

Ingress Object → Ingress Controller
● GCE
● HAProxy
● ...

Ingress (L7)


kube-proxy


iptables kube-proxy

The original slides animate this sequence on Node X:

1. kube-proxy watches services & endpoints through the apiserver.
2. kubectl run ... creates pods; the scheduler places them on nodes.
3. kubectl expose ... creates a new service; kube-proxy sees it through its watch.
4. kube-proxy configures iptables, setting up the service VIP.
5. New endpoints appear; kube-proxy updates the iptables rules.
6. A Client connects to the VIP; iptables routes the traffic directly to a backend pod.

ConfigMaps (and Secrets)


ConfigMaps

Problem: how to manage app configuration
• ...without making overly-brittle container images

12-factor says config comes from the environment

• Kubernetes is the environment

Manage config via the Kubernetes API

Inject config as a virtual volume into your Pods
• late-binding, live-updated (atomic)
• also available as env vars

Status: GA in Kubernetes v1.2

Diagram: the API delivers the ConfigMap into the Pod on the node.
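A sketch of the pattern: a ConfigMap managed through the API, mounted into a Pod as a volume. The names, keys, and paths here are hypothetical:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  app.properties: |      # appears as a file named app.properties
    mode=production
    log-level=info
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: example/app   # hypothetical image
    volumeMounts:
    - name: config
      mountPath: /etc/config   # files show up here, live-updated
  volumes:
  - name: config
    configMap:
      name: app-config
```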


Secrets

Problem: how to grant a pod access to a secured something?

• don’t put secrets in the container image!

12-factor says config comes from the environment

• Kubernetes is the environment

Manage secrets via the Kubernetes API

Inject secrets as virtual volumes into your Pods
• late-binding, tmpfs - never touches disk
• also available as env vars

Diagram: the API delivers the Secret into the Pod on the node.
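The same pattern with a Secret might look like this sketch; the names and the sample value are hypothetical:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  password: cGFzc3dvcmQ=   # base64-encoded value (hypothetical)
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: example/app     # hypothetical image
    volumeMounts:
    - name: creds
      mountPath: /etc/creds   # backed by tmpfs, never touches disk
      readOnly: true
  volumes:
  - name: creds
    secret:
      secretName: db-credentials
```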


Rolling updates


Rolling Updates

ReplicationController
- replicas: 3
- selector:
  - app: MyApp
  - version: v1

Service
- app: MyApp

Google Cloud Platform

Rolling Updates


# Update pods of frontend-v1 using new replication controller data in frontend-v2.json.
$ kubectl rolling-update frontend-v1 -f frontend-v2.json

# Update pods of frontend-v1 using JSON data passed into stdin.
$ cat frontend-v2.json | kubectl rolling-update frontend-v1 -f -

# Update the pods of frontend-v1 to frontend-v2 by just changing the image, and switching the
# name of the replication controller.
$ kubectl rolling-update frontend-v1 frontend-v2 --image=image:v2

# Update the pods of frontend by just changing the image, and keeping the old name
$ kubectl rolling-update frontend --image=image:v2


Rolling Updates, step by step (the Service selects app: MyApp throughout, so it spans both versions):

  v1 replicas:  3   3   2   2   1   1   0   (removed)
  v2 replicas:  0   1   1   2   2   3   3   3

Each step scales the v2 ReplicationController up by one, then the v1 ReplicationController down by one, until only v2 remains.


Deployments


Deployments

Rolling update is too imperative

Deployment manages RC changes for you
• stable object name
• updates are done server-side rather than client-side
• kubectl edit or kubectl apply is all you need

Aggregates stats

Can have multiple updates in flight

Status: BETA in Kubernetes v1.2 ...


Deployments

...

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80


Jobs


Jobs

Run-to-completion, as opposed to run-forever
• Express parallelism vs. required completions
• Workflow: restart on failure
• Build/test: don’t restart on failure

Aggregates success/failure counts

Built for batch and big-data work

Status: GA in Kubernetes v1.2


apiVersion: extensions/v1beta1
kind: Job
metadata:
  name: ffmpeg
spec:
  selector:
    matchLabels:
      app: ffmpeg
  template:
    metadata:
      labels:
        app: ffmpeg
    spec:
      containers:
      - name: ffmpeg
        image: ffmpeg
      restartPolicy: OnFailure

Jobs


apiVersion: extensions/v1beta1
kind: Job
metadata:
  name: ffmpeg
spec:
  selector:
    matchLabels:
      app: ffmpeg
  # run 5 times before done
  completions: 5

...

Jobs


apiVersion: extensions/v1beta1
kind: Job
metadata:
  name: ffmpeg
spec:
  selector:
    matchLabels:
      app: ffmpeg
  # run 5 times before done
  completions: 5
  parallelism: 2

Jobs


DaemonSets


DaemonSets

Problem: how to run a Pod on every node
• or a subset of nodes

Similar to ReplicationController
• principle: do one thing, don’t overload

“Which nodes?” is a selector

Use familiar tools and patterns

Status: BETA in Kubernetes v1.2

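A DaemonSet sketch under the v1.2-era API; the agent name, image, and node label are hypothetical:

```yaml
apiVersion: extensions/v1beta1   # DaemonSet was beta in v1.2
kind: DaemonSet
metadata:
  name: log-agent
spec:
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      nodeSelector:              # "which nodes?" is a selector;
        disk: ssd                # omit it to run on every node
      containers:
      - name: agent
        image: example/log-agent # hypothetical image
```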


Graceful Termination


Graceful Termination

Give pods time to clean up
• finish in-flight operations
• log state
• flush to disk
• 30 seconds by default

Catch SIGTERM, cleanup, exit ASAP

Pod status “Terminating”

Declarative: ‘DELETE’ manifests as an object field in the API
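Stretching the default grace period is one pod-spec field; the pod name and image here are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: worker
spec:
  terminationGracePeriodSeconds: 60   # default is 30
  containers:
  - name: worker
    image: example/worker   # should catch SIGTERM, clean up, exit ASAP
```

On DELETE the pod's status becomes "Terminating", the container receives SIGTERM, and after the grace period it is killed.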


HorizontalPodAutoscalers


HorizontalPodAutoscalers

Automatically scale ReplicationControllers to a target utilization

• CPU utilization for now
• Probably more later

Operates within user-defined min/max bounds

Set it and forget it

Status: GA in Kubernetes v1.2

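A sketch of an autoscaler targeting a ReplicationController; the object names and thresholds are hypothetical, and the API group shown is the one HPA moved to around v1.2:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    kind: ReplicationController
    name: my-rc               # hypothetical RC to scale
  minReplicas: 2              # user-defined bounds
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80   # scale to hold ~80% CPU
```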


Cluster Auto-Scaling


Cluster Scaling

Add nodes when needed
• e.g. CPU usage too high
• nodes self-register with API server

Remove nodes when not needed
• e.g. CPU usage too low

Status: Works on GCE, need other implementations

...


New and coming soon

• Cron (scheduled jobs)
• Custom metrics
• “Apply” a config (even more declarative)
• Interactive containers
• Bandwidth shaping
• Third-party API objects
• Scalability: 1000 nodes, 100+ pods/node
• Performance
• Machine-generated Go clients (less deps!)
• Volume usage stats
• Multi-zone (AZ) support
• Multi-scheduler support
• Node affinity and anti-affinity
• Multi-cluster federation
• API federation
• More volume types
• Private Docker registry
• External DNS integration
• Volume classes and auto-provisioning
• Node fencing
• DIY Cloud Provider plugins
• More container runtimes (e.g. Hyper)
• Better auth{n,z}
• Network policy (micro-segmentation)
• Big data integrations
• Device scheduling (e.g. GPUs)


Kubernetes status & plans

Open sourced in June, 2014
• v1.0 in July, 2015
• v1.1 in November, 2015
• v1.2 ... soon!

Google Container Engine (GKE)
• hosted Kubernetes - don’t think about cluster setup

PaaSes:
• RedHat OpenShift, Deis, Stratos

Distros:
• CoreOS Tectonic, Mirantis Murano (OpenStack), RedHat Atomic, Mesos

Hitting a ~3 month release cadence


The Goal: Read-write open source

Containers are a new way of working

Requires new concepts and new tools

Google has a lot of experience...

...but we are listening to users!

Your input does make a difference!



Kubernetes is Open
- open community
- open design
- open source
- open to ideas

http://kubernetes.io
https://github.com/kubernetes/kubernetes

slack: kubernetes
twitter: @kubernetesio
