Build Production Ready Container Platform Based on Magnum and Kubernetes
Bo Wang [email protected] HouMing Wang [email protected]
Contents
Magnum weakness
Cluster initialization
Mapping keystone user to Harbor
How to integrate features into Magnum
Features for production ready container platform
Private docker image registry
CI/CD Tools
Service discovery
Monitor and Alarm
Log collection and search
Magnum in M release
Magnum after N release
removed container operations; acts as a container infrastructure management service
Functions of Magnum
clustertemplate CREATE/DELETE/UPDATE
cluster CREATE/DELETE/UPDATE
cluster ca SIGN/ROTATE
quota CREATE/DELETE/UPDATE
These functions alone are really not enough to meet customers’ requirements.
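As a concrete illustration of the functions above, here is a minimal, hedged sketch of driving the Magnum API with python-magnumclient and a keystoneauth session. The endpoint, credentials, image, keypair, and flavor names are placeholder assumptions, and exact client arguments can vary between releases.

```python
from keystoneauth1 import loading, session
from magnumclient.v1 import client as magnum_client

# Authenticate against keystone (placeholder endpoint and credentials).
loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(
    auth_url='http://controller:5000/v3',
    username='demo', password='secret', project_name='demo',
    user_domain_name='Default', project_domain_name='Default')
sess = session.Session(auth=auth)
magnum = magnum_client.Client(session=sess)

# clustertemplate CREATE (image/keypair/network/flavor names are assumptions)
template = magnum.cluster_templates.create(
    name='k8s-template', coe='kubernetes',
    image_id='fedora-atomic', keypair_id='mykey',
    external_network_id='public', flavor_id='m1.small',
    network_driver='flannel', docker_volume_size=5)

# cluster CREATE from the template
cluster = magnum.clusters.create(
    name='prod-cluster', cluster_template_id=template.uuid, node_count=3)

# cluster DELETE would be magnum.clusters.delete(cluster.uuid)
```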
Use Case ---- CI/CD
Diagram: push commit -> trigger deploy into the Test Zone; push image to the private image registry; deploy -> product launch in the Production Zone.
Why private image registry
security: proprietary code or confidential information, vulnerability analysis
network issue: slow-speed network; Great Firewall
internal private cloud: no access to internet
Functions of Harbor
private/public projects
image isolation among projects
user authentication
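As an example of how a cluster user works with the private registry, here is a hedged sketch of tagging and pushing a locally built image into a Harbor project with the Docker SDK for Python; the registry host, project name, image name, and credentials are assumptions.

```python
import docker

registry = "harbor.example.com"          # assumed Harbor endpoint
client = docker.from_env()

# Log in to the private registry with a Harbor account (placeholder values).
client.login(username="dev", password="secret", registry=registry)

# Tag a locally built image into a Harbor project, then push it.
image = client.images.get("myapp:1.0")
image.tag(f"{registry}/myproject/myapp", tag="1.0")
for line in client.images.push(f"{registry}/myproject/myapp",
                               tag="1.0", stream=True, decode=True):
    print(line)
```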
CI/CD tools
Why continuous integration and continuous deployment tools
Build faster, test more, fail less.
CI/CD tools help to reduce risk and deliver reliable software in short iterations.
CI/CD has become one of the most common use cases among Docker early adopters.
Functions of Jenkins
dynamically generated pod slaves
pipelines
commit triggers
timed tasks
lots of plugins
...
Internal DNS
Example: service_A -----> service_B. Without DNS you must do it in the following order:
create service_A
get the clusterIP
create service_B with the clusterIP as a parameter
Why kube-dns
The cluster IP is dynamic; the service name is permanent.
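A small sketch of what this buys us: application code can reach a service by its stable DNS name and never needs the cluster IP. The service name, namespace, port, and path below are illustrative (DNS names use hyphens rather than underscores).

```python
import socket
import urllib.request

# kube-dns resolves the stable service name to whatever cluster IP the
# service currently has, so the IP never needs to be passed around.
addr = socket.gethostbyname("service-b.default.svc.cluster.local")
print("service_B currently resolves to", addr)

# Application code can just use the name; the port and path are illustrative.
resp = urllib.request.urlopen("http://service-b.default.svc.cluster.local:8080/healthz")
print(resp.status)
```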
Service Discovery
access services inside the cluster with internal DNS
Diagram: pod_A_1 (Node 1) and pod_A_2 (Node 2) look up service_B in kube-dns, which returns its clusterIP; iptables on each node forwards the clusterIP to endpoint_1/endpoint_2 (pod_B_1, pod_B_2).
Internal DNS
Kubernetes DNS pod holds 3 containers: kubedns, dnsmasq and healthz.
The kubedns process watches the Kubernetes master for changes in Services and Endpoints, and maintains in-memory lookup structures to service DNS requests. The dnsmasq container adds DNS caching to improve performance. The healthz container provides a single health check endpoint while performing dual healthchecks (for dnsmasq and kubedns).
The DNS pod is exposed as a Kubernetes Service with a static IP. Once assigned, the kubelet passes the DNS service IP, configured with the --cluster-dns=10.0.0.10 flag, to each container.
Service Discovery
access a LoadBalancer service from outside the cluster
Diagram: a cloud LB exposes VIP:port and forwards to nodeIP:port (NodePort) on Node 1/2/3; iptables then forwards via clusterIP:port to podIP:port of app1_a, app1_b, app1_c.
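A minimal sketch of creating such a service with the official Kubernetes Python client; the service name, selector, ports, and namespace are illustrative, and type LoadBalancer assumes the cluster has a cloud load-balancer integration (otherwise only the NodePort path applies).

```python
from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="app1"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",                      # cloud LB VIP -> nodeIP:nodePort
        selector={"app": "app1"},                 # matches the app1_a/app1_b/app1_c pods
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
v1.create_namespaced_service(namespace="default", body=svc)
```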
Ingress Controller
access services from outside the cluster with an ingress controller
Diagram: requests to service_url:port hit an Ingress Controller on Node 1/2/3, which forwards directly to podIP:port of app1_a, app1_b, app1_c.
Ingress
Example: Service my-wordpress-svc with 2 pods, and an Ingress resource pointing to that service.
Ingress is a Kubernetes resource which maps a URL path to a service:port.
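A hedged sketch of creating such an Ingress with the Kubernetes Python client; it uses the current networking/v1 classes (older clusters used extensions/v1beta1), and the host, path, and service name are illustrative.

```python
from kubernetes import client, config

config.load_kube_config()
net = client.NetworkingV1Api()

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name="my-wordpress"),
    spec=client.V1IngressSpec(rules=[
        client.V1IngressRule(
            host="blog.example.com",              # illustrative service_url
            http=client.V1HTTPIngressRuleValue(paths=[
                client.V1HTTPIngressPath(
                    path="/", path_type="Prefix",
                    backend=client.V1IngressBackend(
                        service=client.V1IngressServiceBackend(
                            name="my-wordpress-svc",
                            port=client.V1ServiceBackendPort(number=80)))),
            ])),
    ]),
)
net.create_namespaced_ingress(namespace="default", body=ingress)
```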
Ingress Controller
An ingress controller is a reverse proxy which forwards URLs to endpoints.
Watch the Kubernetes API for any Ingress resource change
Update the configuration of the controller (create/update/delete endpoint, SSL certificate)
Reload the configuration
The Ingress Controller detects Ingress resources and fetches all the endpoints of the service.
does not occupy ports on nodes
supports TLS access to services
configurable load-balancing strategy
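As a toy illustration of that watch/update/reload loop (not the actual controller implementation), here is a sketch using the Kubernetes Python client; render_nginx_conf and reload_nginx are hypothetical placeholders for generating and reloading the proxy configuration.

```python
from kubernetes import client, config, watch

config.load_kube_config()
net = client.NetworkingV1Api()

def render_nginx_conf(ingresses):
    # hypothetical helper: turn Ingress rules into proxy routes
    routes = []
    for ing in ingresses:
        for rule in ing.spec.rules or []:
            if not rule.http:
                continue
            for path in rule.http.paths:
                svc = path.backend.service
                routes.append(f"{rule.host}{path.path} -> {svc.name}:{svc.port.number}")
    return "\n".join(routes)

def reload_nginx(conf):
    # hypothetical helper: a real controller writes nginx.conf and reloads nginx
    print("reloading proxy with routes:\n" + conf)

# Watch for Ingress changes, then regenerate and reload the configuration.
w = watch.Watch()
for event in w.stream(net.list_ingress_for_all_namespaces):
    print("Ingress", event["type"], event["object"].metadata.name)
    reload_nginx(render_nginx_conf(net.list_ingress_for_all_namespaces().items))
```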
Ingress Controller
nginx configuration
Monitor and Alarm
Diagram: cAdvisor and node-exporter run on node1, node2, and node3; Prometheus pulls metrics from each node and raises alarm events through Alertmanager.
Monitor and Alarm
cAdvisor collects container runtime info
node-exporter collects node runtime info
Prometheus pulls the info from each node
metrics:
node cpu usage
node memory usage
node filesystem usage
node network I/O rates
container cpu usage
container memory usage
container network I/O rates
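Once Prometheus has scraped these exporters, the metrics can be pulled back over its HTTP query API. A hedged sketch follows; the Prometheus address is an assumption, and container_memory_usage_bytes is one of the standard cAdvisor metric names.

```python
import requests

PROM = "http://prometheus.kube-system.svc.cluster.local:9090"   # assumed address

# container_memory_usage_bytes comes from cAdvisor; node-exporter contributes
# the node_* series (e.g. node_cpu_seconds_total, node_memory_MemAvailable_bytes).
query = 'container_memory_usage_bytes{namespace="default"}'
resp = requests.get(f"{PROM}/api/v1/query", params={"query": query})
for result in resp.json()["data"]["result"]:
    labels, (ts, value) = result["metric"], result["value"]
    print(labels.get("pod") or labels.get("pod_name"), value)
```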
EFK
Diagram: fluentd runs on node1, node2, and node3 and ships logs to elasticsearch; kibana provides search and visualization.
Cluster Architecture
Diagram: cluster architecture. The cluster sits on a private network; master and slave nodes run Kube-DNS, the ingress controller, Prometheus, EFK, and a Jenkins master with dynamically created slaves. Harbor runs on a separate VM reachable over the public network, and keystone users are mapped to Harbor accounts via the harbor admin.
Cluster initialization
Share one Harbor with multiple clusters so public projects are available to all users.
Use Heat to create a Nova instance and configure it to run Harbor.
For each cluster:
Jenkins master runs as a service; Jenkins slaves run as pods that are dynamically created/deleted.
kube-dns runs as a service with three containers running in the backend.
Ingress controller runs as a replication controller with one (or more) replicas and a default backend service.
node-exporter runs as a daemon set on each node; Prometheus runs as a service.
fluentd runs as a daemon set on each node; elasticsearch and kibana run as services.
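As one example of how these components are created, here is a minimal sketch of running node-exporter as a daemon set with the Kubernetes Python client (a production manifest would also set hostNetwork, host mounts, and resource limits); the image tag and namespace are assumptions.

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

ds = client.V1DaemonSet(
    metadata=client.V1ObjectMeta(name="node-exporter", namespace="kube-system"),
    spec=client.V1DaemonSetSpec(
        selector=client.V1LabelSelector(match_labels={"app": "node-exporter"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "node-exporter"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="node-exporter",
                    image="prom/node-exporter:v1.3.1",   # assumed image tag
                    ports=[client.V1ContainerPort(container_port=9100)]),
            ]),
        ),
    ),
)
apps.create_namespaced_daemon_set(namespace="kube-system", body=ds)
```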
Mapping keystone user to harbor
Diagram: the Dashboard and magnum-api use the harbor admin account to create users/projects in Harbor; keystone users then push and get images.
One keystone project ----> one Harbor user
One keystone project ----> one Harbor project
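A hedged sketch of that mapping: when a keystone project is created, create a matching Harbor user and Harbor project through Harbor's REST API. The endpoint paths and payload fields follow the Harbor 1.x style API and may differ between Harbor versions; the host, admin credentials, and initial password are placeholder assumptions.

```python
import requests

HARBOR = "https://harbor.example.com"        # assumed Harbor endpoint
ADMIN = ("admin", "Harbor12345")             # assumed harbor admin credentials

def sync_keystone_project(project_name, user_email):
    # one keystone project ----> one Harbor user
    requests.post(f"{HARBOR}/api/users", auth=ADMIN, json={
        "username": project_name,
        "email": user_email,
        "realname": project_name,
        "password": "ChangeMe123",           # placeholder initial password
    })
    # one keystone project ----> one Harbor project
    requests.post(f"{HARBOR}/api/projects", auth=ADMIN, json={
        "project_name": project_name,
        "metadata": {"public": "false"},
    })

sync_keystone_project("demo", "demo@example.com")
```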