Containing Container Chaos with Kubernetes
Bret McGowen, Google (@bretmcg)
Carter Morgan, Google (@_askcarter)
Workshop setup: http://github.com/bretmcg/kubernetes-workshop
@kubernetesio @bretmcg @_askcarter
Agenda
09:00 - 10:30 - Containers and Kubernetes overview
10:30 - 10:45 - BREAK
10:45 - 12:00 - Kubernetes 101
12:00 - 01:00 - Lunch!
01:00 - 02:30 - Kubernetes in Production
02:30 - 02:45 - BREAK
02:45 - 04:00 - Kubernetes in Production, cont'd
What’s in this for you...
Let's go back in time...
Shared machines: chroots, ulimits, and nice
Noisy neighbors: a real problem that limited our ability to share
The fleet got larger: inefficiency hurts more at scale
Share harder!
ca. 2002: app-specific machine pools, inefficient and painful to manage
Good fences make good neighbors
ca. 2006: Google developed cgroups, inescapable resource isolation that enables better sharing
Everything we do is about isolation
Namespacing is secondary (cf. github.com/google/lmctfy)
We evolved our system, made mistakes, learned lessons
Docker
The time is right to share our experiences, and to learn from yours
job hello_world = {
runtime = { cell = 'ic' } // Cell (cluster) to run in
binary = '.../hello_world_webserver' // Program to run
args = { port = '%port%' } // Command line parameters
requirements = { // Resource requirements
ram = 100M
disk = 100M
cpu = 0.1
}
replicas = 5 // Number of tasks
}
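For comparison, a rough Kubernetes analogue of this Borg job is a Deployment with resource requests. This is only a sketch: the image name is hypothetical, and the Borg units are mapped approximately onto Kubernetes resource fields.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 5                        # Number of tasks
  selector:
    matchLabels: {app: hello-world}
  template:
    metadata:
      labels: {app: hello-world}
    spec:
      containers:
      - name: hello-world-webserver              # Program to run
        image: gcr.io/example/hello-world:1.0    # hypothetical image
        resources:
          requests:                  # Resource requirements
            memory: 100Mi
            cpu: 100m                # 0.1 CPU
            ephemeral-storage: 100Mi
```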
Borg - Developer View
[Diagram: borgcfg (with a config file) and web browsers talk to a replicated BorgMaster (UI shard, link shard, scheduler, persistent Paxos store), which schedules the job's binary onto Borglets running on each machine]
What just happened?
[Image: a Google data center, machines everywhere running "Hello world!" tasks. Image by Connie Zhou]
Developer View
Data center as one machine: machines are just resource boundaries
The App (Monolith)
[Diagram: nginx fronting a single monolith service]
The App (Microservices)
[Diagram: nginx fronting separate hello and auth services]
Containers
Old Way: Shared Machines
• No isolation
• No namespacing
• Common libs
• Highly coupled apps and OS
[Diagram: many apps sharing common libs on a single kernel]
Old Way: Virtual Machines
• Some isolation
• Inefficient
• Still highly coupled to the guest OS
• Hard to manage
[Diagram: each app bundled with its own libs and guest kernel, stacked on the host kernel]
New Way: Containers
[Diagram: each app packaged with its own libs, all sharing one kernel]
But what ARE they?
Containers share the same operating system kernel
Container images are stateless and contain all dependencies:
• static, portable binaries
• constructed from layered filesystems
Containers provide isolation (from each other and from the host):
• Resources (CPU, RAM, disk, etc.)
• Users
• Filesystem
• Network
Why containers?
• Performance
• Repeatability
• Isolation
• Quality of service
• Accounting
• Portability
A fundamentally different way of managing applications
late binding vs. early binding
Images by Connie Zhou
Packaging and Distributing Apps demo
Lab: Workshop setup and Containerizing your application
http://github.com/bretmcg/kubernetes-workshop
But that's just one machine!
Discovery, Scaling, Security, Monitoring, Configuration, Scheduling, Health
https://www.flickr.com/photos/greeblie/2224507899
We’ve been there...
Now that we have containers...
Isolation: Keep jobs from interfering with each other
Scheduling: Where should my job be run?
Lifecycle: Keep my job running
Discovery: Where is my job now?
Constituency: Who is part of my job?
Scale-up: Making my jobs bigger or smaller
Auth{n,z}: Who can do things to my job?
Monitoring: What’s happening with my job?
Health: How is my job feeling?
Kubernetes
Manage applications, not machines
Open source, container orchestrator
Supports multiple cloud and bare-metal environments
Inspired and informed by Google’s experiences and internal systems
Design principles
Declarative > imperative: State your desired results, let the system actuate
Control loops: Observe, rectify, repeat
Simple > Complex: Try to do as little as possible
Modularity: Components, interfaces, & plugins
Legacy compatible: Requiring apps to change is a non-starter
Network-centric: IP addresses are cheap
No grouping: Labels are the only groups
Bulk > hand-crafted: Manage your workload in bulk
Open > Closed: Open Source, standards, REST, JSON, etc.
Kubernetes Made Easy demo
Pods
Pods: Logical Application
• One or more containers and volumes
• Shared namespaces
• One IP per pod
[Diagram: a Pod at 10.10.1.100 holding nginx and monolith containers, with volumes backed by NFS, iSCSI, or GCE persistent disks]
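Such a pod could be written as a manifest along these lines. This is a sketch: the monolith image and the NFS server are hypothetical.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: monolith
spec:
  containers:                 # both containers share the pod's network namespace and IP
  - name: nginx
    image: nginx:1.9
    volumeMounts:
    - {name: data, mountPath: /data}
  - name: monolith
    image: gcr.io/example/monolith:1.0           # hypothetical image
    volumeMounts:
    - {name: data, mountPath: /data}
  volumes:                    # volumes are declared at the pod level and shared
  - name: data
    nfs: {server: nfs.example.com, path: /exports}   # hypothetical NFS server
```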
Lab: Creating and managing pods
http://github.com/bretmcg/kubernetes-workshop
Health checks
Monitoring and Health Checks
[Diagram sequence: on each Node, the Kubelet probes the app in its Pod ("Hey, app v1... You alive?"). When the app fails to answer ("Nope!"), the Kubelet restarts it ("OK, then I'm going to restart you..."); the restarted app answers the next probe ("Yes!") and is left running]
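The Kubelet's "You alive?" question is configured as a probe on the container. A sketch, with a hypothetical image, path, and port:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-v1
spec:
  containers:
  - name: app
    image: gcr.io/example/app:1.0   # hypothetical image
    livenessProbe:                  # failed liveness checks trigger a restart
      httpGet: {path: /healthz, port: 8080}
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:                 # failed readiness checks remove the pod from service endpoints
      httpGet: {path: /readiness, port: 8080}
      initialDelaySeconds: 5
```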
Lab: Monitoring and health checks
http://github.com/bretmcg/kubernetes-workshop
Secrets
Secrets and Configmaps
$ kubectl create secret generic tls-certs --from-file=tls/
[Diagram: the secret travels through the API Server on the Kubernetes Master into etcd]
$ kubectl create -f pods/secure-monolith.yaml
[Diagram sequence: the Kubelet on the scheduled Node pulls the secret from the API Server and mounts it into the Pod at /etc/tls, where the nginx container (10.10.1.100) can read it]
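Mounting the tls-certs secret at /etc/tls is declared in the pod spec. A sketch of what pods/secure-monolith.yaml might contain (the exact contents are in the workshop repo; this version is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-monolith
spec:
  containers:
  - name: nginx
    image: nginx:1.9
    volumeMounts:
    - name: tls-certs
      mountPath: /etc/tls       # the Kubelet materializes the secret's files here
      readOnly: true
  volumes:
  - name: tls-certs
    secret:
      secretName: tls-certs     # created with: kubectl create secret generic tls-certs --from-file=tls/
```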
Lab: Managing application configurations and secrets
http://github.com/bretmcg/kubernetes-workshop
Services
Services: Persistent Endpoint for Pods
• Use Labels to select Pods
[Diagram: a Service fronting hello Pods spread across Node1, Node2, and Node3]
Labels: Arbitrary metadata attached to any Kubernetes object
Two hello Pods, one with labels {version: v1, track: stable} and one with {version: v1, track: test}:
• selector: "version=v1" matches both Pods
• selector: "track=stable" matches only the stable Pod
Services: Persistent Endpoint for Pods
• Use Labels to select Pods
• Internal or External IPs
[Diagram: a Service fronting hello Pods across Node1, Node2, and Node3]
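A Service selecting the hello pods by label might look like the following sketch; the names and ports are illustrative.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  type: LoadBalancer     # external IP; use ClusterIP (the default) for internal-only
  selector:              # any pod carrying these labels becomes an endpoint
    app: hello
    track: stable
  ports:
  - port: 80             # port the Service exposes
    targetPort: 8080     # hypothetical container port
```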
Lab: Creating and managing services
http://github.com/bretmcg/kubernetes-workshop
Recap
Kubernetes
Manage applications, not machines
Open source, container orchestrator
Supports multiple cloud and bare-metal environments
Inspired and informed by Google's experiences and internal systems
Physical Infrastructure
[Diagram: frontend, middleware, and backend tied to machine-1, machine-2, and machine-3]
Logical Infrastructure
[Diagram: frontend, middleware, and backend running on the Kubernetes API as a unified compute substrate]
Workload Portability
Goal: Write once, run anywhere* (*approximately)
Don't force apps to know about concepts that are cloud-provider-specific
Examples of this:
● Network model
● Ingress
● Service load-balancers
● PersistentVolumes
Community
• Top 0.01% of all GitHub projects
• 1200+ external projects based on k8s
• 690+ unique contributors
• Companies contributing, companies using
• Pods: a logical application with one or more containers and volumes, shared namespaces, and one IP per pod
• Monitoring and Health Checks: the Kubelet probes each app ("Hey, app v1... You alive?") and restarts it if it fails
• Secrets and Configmaps: e.g. kubectl create secret generic tls-certs --from-file=tls/
• Services: persistent endpoints for Pods, selected by Labels, with internal or external IPs
• Labels: arbitrary metadata attached to any Kubernetes object
Kubernetes in Production
Deployments
Deployments: Drive current state towards desired state
[Diagram sequence: a Deployment (app: hello) starts with replicas: 1 and one Pod on Node1; raising it to replicas: 3 schedules Pods across Node1, Node2, and Node3; when a Pod or Node is lost, the Deployment detects the shortfall and creates a replacement Pod on a surviving Node]
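The desired state in the diagrams is just a manifest; the controller does the rest. A sketch, with a hypothetical image:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3                  # desired state: the controller adds or removes pods to match
  selector:
    matchLabels: {app: hello}
  template:
    metadata:
      labels: {app: hello}
    spec:
      containers:
      - name: hello
        image: gcr.io/example/hello:1.0   # hypothetical image
```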
Lab: Creating and managing deployments
http://github.com/bretmcg/kubernetes-workshop
Rolling Updates
Rolling Update
[Diagram sequence: a Service (ghost) fronts three app v1 Pods on Node1, Node2, and Node3. One at a time, an app v2 Pod is created and added behind the Service, then an app v1 Pod is removed, repeating until all three Pods run app v2. The Service stays up throughout]
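The one-at-a-time behavior above corresponds to the Deployment's update strategy; updating the pod template's image (say, from app:1.0 to app:2.0) then triggers the rollout. A sketch of the relevant spec fragment:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra (v2) pod beyond the desired count
      maxUnavailable: 1    # at most one pod below the desired count during the update
```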
Lab: Rolling out updates
http://github.com/bretmcg/kubernetes-workshop
Implementing a CI/CD Pipeline on K8s
Automating Deployments
1. Check in code
2. Build an image
3. Test the image
4. Push the image to a registry
5. Apply the change to manifest files
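The last step usually means rewriting the image tag in the Deployment manifest that the pipeline applies. A sketch with hypothetical names; COMMIT_SHA is a placeholder the CI system substitutes before running kubectl apply:

```yaml
# deployment.yaml fragment the pipeline rewrites on each build
spec:
  template:
    spec:
      containers:
      - name: app
        image: gcr.io/example/app:COMMIT_SHA   # CI replaces this tag, then applies the manifest
```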
Lab: Implementing a CI/CD Pipeline on Kubernetes
https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes
Thank you!
kubernetes.io
@bretmcg @_askcarter