K8s Workshop Guide
Revision: 1.0
TriK8s Meetup Workshop © 2016 - 2017 CloudPerceptions, LLC
Workshop Prep

• Install:
  – Vagrant 1.8.5 (minimum)
  – VirtualBox 5.0.26
• All files available at: https://drive.google.com/open?id=0BydSpIQ88Z2ZTlF2SDktdTY4V2s
• The following takes about 15-20 minutes:
  – tar xvfz k8s-lab-seed.tar.gz
  – cd k8s-lab-seed
  – vagrant box add k8s-basebox-ubuntu.box --name k8s-basebox-ubuntu
  – vagrant up
  – vagrant halt  # to shut them down
Kubernetes Architecture

[Diagram: clients (kubectl) call REST services on the API Server, which runs on the master node alongside etcd, the controller-manager, and the scheduler; the controller-manager and scheduler handle scheduling and synchronize desired state with actual state. Each worker node runs kubelet (with cAdvisor), kube-proxy (which programs iptables and syncs services and service endpoints), and a container engine hosting Pods of containers.]
Installing and Configuring

• There are basically two ways to implement this architecture:
  – Bare metal (or VM) install (today's meetup)
  – Container install (next meetup)
• Are there easier ways to do this? Short answer... YES!
Kubernetes Lab Cluster

Vagrant Name  Hostname      Control Plane IP  Data Plane IP   Flannel Network IP  Docker Network IP
master        kube-master   10.1.2.10/24      192.168.33.10   172.17.xx.0/16      172.17.xx.1/24
worker1       kube-worker1  10.1.2.11/24      192.168.33.11   172.17.yy.0/16      172.17.yy.1/24
worker2       kube-worker2  10.1.2.12/24      192.168.33.12   172.17.zz.0/16      172.17.zz.1/24
Lab Topology

[Diagram: kube-master (also usable as a worker), kube-worker1, and kube-worker2, each running k8s components, with etcd on the master. Control plane: 10.1.2.0/24 (10.1.2.10, 10.1.2.11, 10.1.2.12). Data plane: 192.168.33.0/24 (192.168.33.10, 192.168.33.11, 192.168.33.12). Each node has a flannel.1 interface on the 172.17.0.0/16 overlay network with a docker0 bridge behind it.]
Let’s get started!!
Per-VM Basics

• Bring up your VMs

tar xvfz k8s-lab-seed.tar.gz
cd k8s-lab-seed
vagrant box add k8s-basebox-ubuntu.box --name k8s-basebox-ubuntu
vagrant up

• Access the master

vagrant ssh master

• Since we don't have DNS, let's add some entries to /etc/hosts (remove the 127.0.0.1 entry for the specific host)

10.1.2.10 kubernetes
10.1.2.10 kube-master
10.1.2.11 kube-worker1
10.1.2.12 kube-worker2

• Repeat the hosts file update for the other two VMs (see the sketch below)

vagrant ssh worker1
vagrant ssh worker2
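If you would rather script the hosts file edit, here is a minimal sketch to run inside each VM; it assumes Ubuntu's default loopback entry maps the VM's own hostname on a 127.x line:

# drop the loopback entry for this host's name (assumption: it exists on a 127.x line)
sudo sed -i "/^127\.0\.[01]\.1[[:space:]]\+$(hostname)/d" /etc/hosts
# append the cluster entries
cat <<'EOF' | sudo tee -a /etc/hosts
10.1.2.10 kubernetes
10.1.2.10 kube-master
10.1.2.11 kube-worker1
10.1.2.12 kube-worker2
EOF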
Master Node
Etcd

• Get Etcd (we're going to just run it on the master node)

vagrant ssh master
sudo apt-get install -y etcd

• Edit the Etcd config file (/etc/default/etcd) and add the following

ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
ETCD_LISTEN_PEER_URLS="http://localhost:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"

• Restart Etcd and check its status

sudo systemctl restart etcd.service
sudo systemctl status etcd.service -l --no-pager
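As a quick sanity check that Etcd is answering on its client port, a sketch using the v2 etcdctl that ships alongside the etcd package:

etcdctl --endpoints http://localhost:2379 cluster-health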
Prep for later

• Some prep for later

sudo mkdir -p /etc/kubernetes/ssl/
sudo mkdir -p /etc/kubernetes/manifests/
mkdir ~/certs
Certificates

• Generate the CA private key

cd ~/certs
openssl genrsa -out ca-key.pem 2048

• Generate the root CA and self-sign it with our CA private key

openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"

• Generate the API server private key

openssl genrsa -out apiserver-key.pem 2048

• Create a certificate signing request (CSR) using the API server private key

cp /vagrant/seed/certs/k8sSSL.cnf .
openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config k8sSSL.cnf
Certificates

• Create and sign a certificate with the CA signing authority from the certificate request

openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out apiserver-cert.pem -days 365 -extensions v3_req -extfile k8sSSL.cnf

• Finalize our certificates and keys

sudo cp -va -t /etc/kubernetes/ssl/ ca.pem apiserver-cert.pem apiserver-key.pem
sudo chmod 600 /etc/kubernetes/ssl/apiserver-key.pem
sudo chown root:root /etc/kubernetes/ssl/apiserver-key.pem
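To sanity-check the issued certificate, for example that the subject alternative names from k8sSSL.cnf actually made it in, a short sketch:

openssl x509 -in apiserver-cert.pem -noout -subject -dates
openssl x509 -in apiserver-cert.pem -noout -text | grep -A1 "Subject Alternative Name"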
Install k8s on Master (from seed)

• Set up some environment variables to help

export KUBE_BIN_PATH=/usr/bin/

• Stage the apiserver kubeconfig file, which contains references to SSL certificates and Kubernetes endpoints

sudo cp -va /vagrant/seed/conf/api-kubeconfig.yaml /etc/kubernetes/

• Get the binaries and systemd files

sudo cp -va -t ${KUBE_BIN_PATH} /vagrant/seed/kubernetes/kube-apiserver
sudo cp -va -t ${KUBE_BIN_PATH} /vagrant/seed/kubernetes/kube-controller-manager
sudo cp -va -t ${KUBE_BIN_PATH} /vagrant/seed/kubernetes/kube-scheduler
sudo cp -va -t ${KUBE_BIN_PATH} /vagrant/seed/kubernetes/kubectl
sudo cp -va /vagrant/seed/systemd/kube-apiserver.service /lib/systemd/system
sudo cp -va /vagrant/seed/systemd/kube-controller-manager.service /lib/systemd/system
sudo cp -va /vagrant/seed/systemd/kube-scheduler.service /lib/systemd/system
sudo chmod 755 /usr/bin/kube*
Install k8s on Master (ALTERNATE)

• Set up some environment variables to help

export STABLE_KUBE_VERSION=`wget -qO- https://storage.googleapis.com/kubernetes-release/release/stable.txt`
export KUBE_BIN_PATH=/usr/bin/

• Stage the apiserver kubeconfig file, which contains references to SSL certificates and Kubernetes endpoints

sudo cp -va /vagrant/seed/conf/api-kubeconfig.yaml /etc/kubernetes/

• Get the binaries and systemd files

sudo wget -N -P ${KUBE_BIN_PATH} http://storage.googleapis.com/kubernetes-release/release/${STABLE_KUBE_VERSION}/bin/linux/amd64/kube-apiserver
sudo wget -N -P ${KUBE_BIN_PATH} http://storage.googleapis.com/kubernetes-release/release/${STABLE_KUBE_VERSION}/bin/linux/amd64/kube-controller-manager
sudo wget -N -P ${KUBE_BIN_PATH} http://storage.googleapis.com/kubernetes-release/release/${STABLE_KUBE_VERSION}/bin/linux/amd64/kube-scheduler
sudo wget -N -P ${KUBE_BIN_PATH} http://storage.googleapis.com/kubernetes-release/release/${STABLE_KUBE_VERSION}/bin/linux/amd64/kubectl
sudo cp -va /vagrant/seed/systemd/kube-apiserver.service /lib/systemd/system
sudo cp -va /vagrant/seed/systemd/kube-controller-manager.service /lib/systemd/system
sudo cp -va /vagrant/seed/systemd/kube-scheduler.service /lib/systemd/system
sudo chmod 755 /usr/bin/kube*
Configure the Master — Kubelet
• Get the Kubernetes System configuration file

sudo cp -va /vagrant/seed/conf/k8s-config /etc/kubernetes

• Edit as needed, and review the Kubernetes System configuration file at "/etc/kubernetes/k8s-config"

# The IP address on which to advertise the apiserver to members of the cluster.
ADVERTISE_ADDR="10.1.2.10"
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR=true
# journal message level, 0 is debug
KUBE_LOG_LEVEL="0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV=true
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="http://kube-master:8080"
# Secure - hostname should be what is used in API certificate
KUBE_MASTER_SECURE="https://kubernetes:6443"
# For DNS Service. Use an IP from CLUSTER_CIDR if using SkyDNS cluster add-on
CLUSTER_DNS="10.200.10.80"
Configure the Master — API
• Get the Kubernetes API configuration file

sudo cp -va /vagrant/seed/conf/apiserver /etc/kubernetes

• Edit as needed, and review the Kubernetes API server configuration file at "/etc/kubernetes/apiserver"

# The address on the local server to listen to.
KUBE_API_ADDRESS="0.0.0.0"
# The port on the local server to listen on.
KUBE_API_INSECURE_PORT="8080"
KUBE_API_SECURE_PORT="6443"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="http://kube-master:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="10.200.10.0/24"
# default admission control policies
KUBE_ADMISSION_CONTROL="NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
# Add your own!
#KUBE_API_ARGS="--bind-address=0.0.0.0"
#APISERVER_COUNT=1
Configure the Master — Controller-Manager
• Get the Kubernetes Controller-Manager configuration file

sudo cp -va /vagrant/seed/conf/controller-manager /etc/kubernetes

• Edit as needed, and review the Controller-Manager configuration file at "/etc/kubernetes/controller-manager"

# CIDR Range for Pods in cluster
CLUSTER_CIDR="172.17.0.0/16"
Restart and Verify Operation

• Enable and start all k8s services

for SERVICE in `ls /lib/systemd/system/kube*`
do
  sudo systemctl enable `basename $SERVICE`
  sudo systemctl restart `basename $SERVICE`
done

• Verify that the Kubernetes services are running

for SERVICE in `ls /lib/systemd/system/kube*`
do
  sudo systemctl status `basename $SERVICE` -l --no-pager
done
kubectl get componentstatuses

• Verify that the Kubernetes services are bound to the right ports on the master node

sudo netstat -tulnp | grep -E "(kube)|(etcd)"
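As one more check, the API server should answer a version query on the insecure port; a sketch (the secure port serves the certificate we generated, so -k skips CA verification for a quick look, and it may still require credentials depending on the Kubernetes version):

curl -s http://kube-master:8080/version
curl -sk https://kubernetes:6443/version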
Worker Node
Prep for later

• Access a worker node

vagrant ssh worker1 # or worker2

• Some prep for later

sudo mkdir -p /etc/kubernetes/ssl/
sudo mkdir -p /etc/kubernetes/manifests/
mkdir ~/certs

• Repeat for both workers (worker1 and worker2)
Certificates

• Return to your master node

vagrant ssh master

• Generate the worker node private key

cd ~/certs
openssl genrsa -out worker-key.pem 2048

• Create a certificate signing request using the worker private key

cp /vagrant/seed/certs/k8s-worker.cnf .
openssl req -new -key worker-key.pem -out worker.csr -subj "/CN=kube-worker" -config k8s-worker.cnf
Certificates

• Create and sign a certificate with the CA signing authority (created earlier) from the certificate request

openssl x509 -req -in worker.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out worker-cert.pem -days 365 -extensions v3_req -extfile k8s-worker.cnf

• Finalize our certificates and keys for the master (as a worker node)

sudo cp -va -t /etc/kubernetes/ssl/ ca.pem worker-cert.pem worker-key.pem
sudo chmod 600 /etc/kubernetes/ssl/worker-key.pem
sudo chown root:root /etc/kubernetes/ssl/worker-key.pem

• Copy certs to the worker nodes (worker1 and worker2); take the values from the table below (see also the loop sketch that follows)

export WORKER_IP="<<WORKER_IP from table>>"
export VAGRANT_INSTANCE="<<vagrant name from table>>"
scp -i /vagrant/.vagrant/machines/$VAGRANT_INSTANCE/virtualbox/private_key ca.pem ubuntu@$WORKER_IP:~/certs
scp -i /vagrant/.vagrant/machines/$VAGRANT_INSTANCE/virtualbox/private_key worker-cert.pem ubuntu@$WORKER_IP:~/certs
scp -i /vagrant/.vagrant/machines/$VAGRANT_INSTANCE/virtualbox/private_key worker-key.pem ubuntu@$WORKER_IP:~/certs

Vagrant Name  Control Plane IP
worker1       10.1.2.11/24
worker2       10.1.2.12/24
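To avoid filling in the two variables by hand, a hedged loop over both workers (it assumes the vagrant-generated SSH keys sit at the paths used above and that ~/certs already exists on each worker):

for VAGRANT_INSTANCE in worker1 worker2
do
  case $VAGRANT_INSTANCE in
    worker1) WORKER_IP=10.1.2.11 ;;
    worker2) WORKER_IP=10.1.2.12 ;;
  esac
  scp -i /vagrant/.vagrant/machines/$VAGRANT_INSTANCE/virtualbox/private_key \
      ca.pem worker-cert.pem worker-key.pem ubuntu@$WORKER_IP:~/certs
done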
Certificates

• Access a worker node

vagrant ssh worker1 # or worker2 or master (as a worker node)

• Put the certificates where they can be referenced

cd ~/certs
sudo cp -va -t /etc/kubernetes/ssl/ ca.pem worker-cert.pem worker-key.pem
sudo chmod 600 /etc/kubernetes/ssl/worker-key.pem
Install k8s on Worker (from seed)

• Set up some environment variables to help

export KUBE_BIN_PATH=/usr/bin/

• Stage the worker kubeconfig file, which contains references to SSL certificates and Kubernetes endpoints

sudo cp -va /vagrant/seed/conf/worker-kubeconfig.yaml /etc/kubernetes/

• Get the binaries and systemd files

sudo cp -va -t ${KUBE_BIN_PATH} /vagrant/seed/kubernetes/kube-proxy
sudo cp -va -t ${KUBE_BIN_PATH} /vagrant/seed/kubernetes/kubelet
sudo cp -va /vagrant/seed/systemd/kubelet.service /lib/systemd/system
sudo cp -va /vagrant/seed/systemd/kube-proxy.service /lib/systemd/system
sudo chmod 755 /usr/bin/kube*
Install k8s on Worker (ALTERNATE)

• Set up some environment variables to help

export STABLE_KUBE_VERSION=`wget -qO- https://storage.googleapis.com/kubernetes-release/release/stable.txt`
export KUBE_BIN_PATH=/usr/bin/

• Stage the worker kubeconfig file, which contains references to SSL certificates and Kubernetes endpoints

sudo cp -va /vagrant/seed/conf/worker-kubeconfig.yaml /etc/kubernetes/

• Get the binaries and systemd files

sudo wget -N -P ${KUBE_BIN_PATH} http://storage.googleapis.com/kubernetes-release/release/${STABLE_KUBE_VERSION}/bin/linux/amd64/kube-proxy
sudo wget -N -P ${KUBE_BIN_PATH} http://storage.googleapis.com/kubernetes-release/release/${STABLE_KUBE_VERSION}/bin/linux/amd64/kubelet
sudo cp /vagrant/seed/systemd/kube-proxy.service /lib/systemd/system
sudo cp /vagrant/seed/systemd/kubelet.service /lib/systemd/system
sudo chmod 755 /usr/bin/kube*
Configure the Worker — Kubelet
• Get the Kubernetes System configuration file (skip for the master; it was already done previously)

sudo cp -va /vagrant/seed/conf/k8s-config /etc/kubernetes

• Edit as needed, and review the Kubernetes System configuration file at "/etc/kubernetes/k8s-config"

# The IP address on which to advertise the apiserver to members of the cluster.
ADVERTISE_ADDR="10.1.2.10"
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR=true
# journal message level, 0 is debug
KUBE_LOG_LEVEL="0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV=true
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="http://kube-master:8080"
# Secure - hostname should be what is used in API certificate
KUBE_MASTER_SECURE="https://kubernetes:6443"
# For DNS Service. Use an IP from CLUSTER_CIDR if using SkyDNS cluster add-on
CLUSTER_DNS="10.200.10.80"
Restart and Verify Operation

• Enable and start all k8s services

for SERVICE in `ls /lib/systemd/system/kube*`
do
  sudo systemctl enable `basename $SERVICE`
  sudo systemctl restart `basename $SERVICE`
done

• Verify that the Kubernetes services are running

for SERVICE in `ls /lib/systemd/system/kube*`
do
  sudo systemctl status `basename $SERVICE` -l --no-pager
done

• Verify that the Kubernetes services are bound to the right ports on the worker node

sudo netstat -tulnp | grep -E "(kube)|(etcd)"

• Repeat from the worker "Certificates" step (accessing the node and staging its certs) for the remaining worker nodes
Cluster Verification

• Access the master node "kube-master"

vagrant ssh master

• Check out a few things about your cluster (see the sketch below for the namespace question)

kubectl get nodes
kubectl get pods
kubectl get rc
kubectl get services
kubectl get endpoints
kubectl cluster-info

Quiz: What do you expect to see in each command's output? Do we have nodes/pods/RCs yet? Do we have services and endpoints yet? How about the cluster? Which namespace are we working in now?
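One way to answer the namespace question, as a sketch (the current context's namespace defaults to "default" when unset):

kubectl get namespaces
kubectl config view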
Network Overlay — flannel
Flannel Architecture

• docker "--bip=" defines a subnet for the Docker bridge, selected from the larger overlay (see the sketch below)
• the flannel daemon sets up flannel iptables rules and subnet leases, and populates the FDB and ARP tables

[Diagram: three hosts, each with a flannel interface (e.g. 172.17.82.0/16, 172.17.100.0/16, 172.17.84.0/16) and a docker bridge (172.17.82.1/24, 172.17.100.1/24, 172.17.84.1/24). Kernel routes and iptables send more-specific traffic to the local bridge and less-specific overlay traffic into VXLAN tunnels across the physical network via ethN, with IPTABLES masquerading** at egress; Pods of containers attach to each bridge.]

**This can be disabled using "--ip-masq=false".
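Where does docker's "--bip=" value come from? flanneld writes its lease to an environment file that the docker unit can source; a sketch of the usual wiring (the file location and variable names are flannel defaults, worth checking against the seed's docker.service):

cat /run/flannel/subnet.env
# typically something like:
#   FLANNEL_NETWORK=172.17.0.0/16
#   FLANNEL_SUBNET=172.17.82.1/24
#   FLANNEL_MTU=1450
#   FLANNEL_IPMASQ=false
# docker then starts with flags derived from it, e.g.:
#   --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}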
Prep Work

• Access a node (master, worker1, or worker2)

vagrant ssh master # or worker1 or worker2

• Shut down the Docker engine and clean up stale data

sudo systemctl stop docker; sleep 2
# Do some cleanup from the previously running service
sudo mv /var/lib/docker ~/old-var-lib-docker-dir

• Get the flannel binaries (from seed, v0.5.5)

sudo cp /vagrant/seed/flannel/flanneld /usr/bin
sudo cp /vagrant/seed/systemd/flanneld.service /lib/systemd/system
Setup and Configure flannel

Performed on the master node only

• Create a JSON description of your flannel network (you can just get it from the seed directory; see the next step)

{
  "Network": "172.17.0.0/16",
  "SubnetLen": 24,
  "Backend": { "Type": "vxlan", "VNI": 1 }
}

• flannel uses Etcd, in particular for its config, so let's load the config into Etcd

sudo mkdir /etc/flannel
sudo cp -va /vagrant/seed/conf/flannel-config.json /etc/flannel
# port our config into Etcd
etcdctl set k8s-cluster/network/config < /etc/flannel/flannel-config.json
etcdctl get k8s-cluster/network/config
Setup and Configure flannel

• Get the flannel config file (from seed)

sudo cp -va /vagrant/seed/conf/flannel /etc/default/flannel

• Take a look at its content so we can tell flannel which interface to sit on; take the Data Plane IP from the table below (a sed sketch follows the table)

# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://kube-master:2379,http://kube-master:4001"
# etcd config key. This is the configuration key that flannel queries
# for address range assignment
FLANNEL_ETCD_KEY="k8s-cluster/network"
# Interface where flannel will sit
IPV4_IFACE="<<IP_ADDRESS_FROM_DATA_PLANE_IP_COLUMN>>"
# Disable / Enable masquerading
IP_MASQ=False
# Location of flannel-config.json
FLANNEL_CONFIG="/etc/flannel/flannel-config.json"
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

Vagrant Name  Hostname      Control Plane IP  Data Plane IP   Flannel Network IP  Docker Network IP
master        kube-master   10.1.2.10/24      192.168.33.10   172.17.xx.0/16      172.17.xx.1/24
worker1       kube-worker1  10.1.2.11/24      192.168.33.11   172.17.yy.0/16      172.17.yy.1/24
worker2       kube-worker2  10.1.2.12/24      192.168.33.12   172.17.zz.0/16      172.17.zz.1/24
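For example, on the master the Data Plane IP is 192.168.33.10, so a one-line sketch to fill in the placeholder (adjust the address per node):

sudo sed -i 's|<<IP_ADDRESS_FROM_DATA_PLANE_IP_COLUMN>>|192.168.33.10|' /etc/default/flannel
grep IPV4_IFACE /etc/default/flannel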
Start and Verify flannel

• Enable and start the flannel service

sudo systemctl daemon-reload
sudo systemctl enable flanneld
sudo systemctl start flanneld

• Verify flannel is running

sudo systemctl status flanneld -l --no-pager

• Adapt docker.service to work with flannel

sudo cp /vagrant/seed/systemd/docker.service /lib/systemd/system
sudo systemctl daemon-reload
sudo systemctl start docker

• Verify Docker is running

sudo systemctl status docker -l --no-pager
Start and Verify flannel

• Check the local host networking (see the sketch below for VXLAN details)

ip address
ip route

• Restart the k8s services

for SERVICE in `ls /lib/systemd/system/kube*`
do
  sudo systemctl restart `basename $SERVICE`
done

Quiz: What do you notice about the networking? Do the routes set up by flanneld and the docker0 bridge IP align with your expectations?

• Repeat from the flannel "Prep Work" slide for the remaining nodes
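To inspect the VXLAN parameters flannel chose (VNI, UDP port, local endpoint), the -d flag on ip link is useful:

ip -d link show flannel.1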
Network Verification

• Verify flannel networking is established on each node (master, worker1, worker2)

bridge fdb show dev flannel.1

Quiz: Notice the VXLAN peer endpoints. Do these look like the correct peer endpoints?
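As an end-to-end sketch, ping another node's docker0 address across the overlay; 172.17.yy.1 below stands in for whatever lease worker1 actually received (read it with ip addr on that node):

# on worker1:
ip -4 addr show docker0
# on the master, substitute the address you saw:
ping -c 3 172.17.yy.1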
Does it work?
Test drive

• Create a simple Deployment and test it out

---
apiVersion: "extensions/v1beta1"
kind: Deployment
metadata:
  name: "cluster-test"
  namespace: "default"
  labels:
    k8s-app: "k8s-test"
    version: "v1"
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 3
  selector:
    matchLabels:
      k8s-app: "k8s-test"
      version: "v1"
  template:
    metadata:
      labels:
        k8s-app: "k8s-test"
        version: "v1"
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: i-am-alive
        image: ubuntu:latest
        env:
        - name: MESSAGE
          value: "I'm Alive!!!!"
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        command: ["/bin/bash","-c"]
        args: ["i=0;while true; do sleep 10; echo \"${i}: ${MESSAGE} On ${MY_POD_NAME}@${MY_POD_IP}\";i=$((i+1)); done"]
Test drive

• Start the Deployment

vagrant ssh master
# download the yaml from Google Drive here: https://drive.google.com/open?id=0BydSpIQ88Z2ZTlF2SDktdTY4V2s
kubectl create -f im_alive.yaml

• Watch the pods' statuses until 3 unique instances are running

kubectl get pods -w

• Check the Deployment, if you want

kubectl get deployment cluster-test
Test Drive

• Check each pod's logs

for pod in `kubectl get pods -l k8s-app=k8s-test | grep -v "^NAME" | awk '{print $1}'`
do
  echo $pod ":"
  kubectl logs $pod | tail -n 2
  echo ""
done

Quiz: Notice the log messages. Do the IP addresses of each Pod align with your expectations for the node that the pod is running on?

• Try deleting a pod, then check the pods and the pods' logs again (see the sketch below to watch the replacement appear)

kubectl delete pod <pod-name>

• Try increasing the number of replicas, then check the logs

kubectl scale deployment cluster-test --replicas=4
Cleanup (optional)

• Clean up the Deployment, if you want to

vagrant ssh master
kubectl delete -f im_alive.yaml # or kubectl delete deployment cluster-test

• Exit the VirtualBox instances, and delete them if you want to, OR...

vagrant destroy # --force if you don't want to be prompted

• ...just stop them and play later

vagrant halt