Hybrid Cloud OpenStack Meetup

Copyright © GigaSpaces 2016. All rights reserved.

Orchestrating In Hybrid Environments

DeWayne Filppi
GigaSpaces Technologies
2/10/2016

Kubernetes, OpenStack, and beyond


Agenda

• Describe a real-world effort orchestrating Kubernetes on OpenStack

• Not just Kubernetes, but Kubernetes + non-Kubernetes hosted services in a single orchestration

• Add to that manual and automated metric-driven scaling inside Kubernetes, and of Kubernetes itself


Flash Kubernetes Review

• Container orchestration (Docker, rkt)

• Master/minion architecture

• Containers considered immutable

• Pod concept

• Replication controller concept

• Service concept

• kubectl utility


Kubernetes Overview


Target Architecture


Orchestration Steps

• Preparation
  – Create Docker containers for Nodejs and metrics collection
  – Instrument containers to configure externals
  – Create RC and service descriptor templates
  – Create Riemann config for threshold detection

• Runtime
  – Create network/subnet/router for VMs on OpenStack
  – Create VMs for Kubernetes and MongoDb on network
  – Create VM for Riemann event processor
  – Deploy Riemann and start with detection config
  – Save Kubernetes and Mongo IPs/ports, Riemann IPs/ports
  – Install MongoDb on Mongo VM(s)
  – Set env vars in RC template (Mongo and Riemann location, etc.)
  – ……


Orchestration Steps

• Runtime (continued)

– Copy rendered templates to Kubernetes master

– Use kubectl on master to create replication controller (Pod with nodejs and collector)

• Causes nodejs to start and connect to external Mongodb cluster

• Causes metric collection to begin and metrics to be pushed to Riemann

– Use kubectl on master to create service endpoint

• Creates externally accessible single IP to access Pod collection


Orchestration Steps

• Runtime (continued)

– When a threshold is breached, execute kubectl scale rc.

– When Kubernetes runs out of capacity, expand it by adding another node/minion and updating the master

– Wrap all the above with retry logic
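The "retry logic" in the last bullet can be sketched as a small decorator. This is an illustrative Python sketch, not the actual orchestration code; the step function `copy_templates_to_master` is a hypothetical stand-in for a real step such as copying rendered templates to the Kubernetes master:

```python
import time
from functools import wraps

def retry(attempts=3, delay=5.0):
    """Retry a flaky orchestration step a fixed number of times."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            last_exc = None
            for _ in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception as exc:  # keep last failure for re-raise
                    last_exc = exc
                    time.sleep(delay)
            raise last_exc
        return wrapper
    return decorator

@retry(attempts=3, delay=0.01)
def copy_templates_to_master():
    # placeholder for a real step, e.g. scp of rendered descriptors
    return "ok"
```

Wrapping each runtime step this way keeps transient cloud API failures from aborting the whole deployment.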


Analysis

• Automating that is a lot of work

• Complex/error prone

• Making it cloud-neutral is difficult


TOSCA Modeling

• Topology and Orchestration Specification for Cloud Applications (OASIS)

• Not really. More like Orchestration for Applications.

• Models deployments as DAG of “nodes”

• Nodes are anything that needs orchestrating (hardware, software, VMs, containers, networks, floating IPs, etc.)

• Everything connected by relationships


TOSCA Modeling

Example


TOSCA Modeling

• An orchestrator applies “workflows” to the node graph/model.

• TOSCA defines a type system that lets custom types be defined.

• Types have operations tied to code

• Example:

– A “Server” node for OpenStack would define the operations needed to drive Nova/Neutron/Cinder, etc.


Modeling Our Target Architecture

• Define custom types for Kubernetes:

– Master

– Minion/Node

– Define operations on these to install and configure Kubernetes

• Probably delegate to Salt, Puppet, Ansible, Chef, etc..


Modeling Our Target Architecture

• Define custom types for MongoDb:

– Mongod, Mongocfg

– Mongos

– Define operations on these to install and configure MongoDb

• Probably delegate to Salt, Puppet, Ansible, Chef, etc..


Modeling Our Target Architecture

• Define custom type for a generic Microservice

• Define a relationship that installs the microservice on the Kub master

• Note that the Microservice type can be configured to upload arbitrary files to the master node


Modeling Our Target Architecture

• Complete model (simplified)
• Assume flat network


Rendering The Architecture

• A standard workflow will walk the model and execute associated code.
• First, relationships are evaluated to determine order.
• A tree of tasks is constructed, and the tasks are executed.

• Note the VMs are independent.
• They are all rendered/instantiated in parallel.
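The "independent sets" the workflow executes are just the levels of a topological sort over the relationship graph. A toy Python sketch, using a hypothetical simplified version of the model from these slides:

```python
def execution_levels(deps):
    """Group nodes into levels: each level depends only on earlier
    levels, so everything within a level can run in parallel."""
    remaining = dict(deps)  # node -> set of prerequisite nodes
    levels = []
    while remaining:
        # a node is ready when none of its prerequisites are still pending
        ready = {n for n, pre in remaining.items() if not (pre & remaining.keys())}
        if not ready:
            raise ValueError("cycle in relationship graph")
        levels.append(sorted(ready))
        for n in ready:
            del remaining[n]
    return levels

# hypothetical simplified model: VMs first, then services, then the app
deps = {
    "master_vm": set(), "node_vm": set(), "mongo_vm": set(),
    "kube_master": {"master_vm"}, "mongod": {"mongo_vm"},
    "kube_node": {"node_vm", "kube_master"},
    "microservice": {"kube_master", "mongod"},
}
```

Here the three VMs land in level 0 (all created in parallel), the Kubernetes master and Mongod in level 1, and the node plus microservice in level 2, matching the rendering order the next slides walk through.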


Rendering The Architecture

• Now the next independent set can be done
• Install Master and Mongod


Rendering The Architecture

• Now the last independent set can be done
• Install “Node” and Microservice


Note on Cloud Portability Concerns

• Three approaches
  – Abstraction (AKA “least common denominator”)
    • Not necessarily all that bad if the targets are reasonably isomorphic.
  – Pluggability
    • Future proof
    • No “least common denominator”
    • Potential difficulty translating between targets
  – Hybrid
    • Write a plugin that serves as a façade for multiple clouds


Note on Cloud Portability Concerns

• Example Equivalent Node Types

cloudify.nodes.Root:
  interfaces:
    cloudify.interfaces.lifecycle:
      create: {}
      configure: {}
      start: {}
      stop: {}
      delete: {}

cloudify.nodes.SecurityGroup:
  derived_from: cloudify.nodes.Root

cloudify.openstack.nodes.SecurityGroup:
  derived_from: cloudify.nodes.SecurityGroup
  properties:
    description:
      type: string
      default: ''
    rules:
      default: []
    disable_default_egress_rules:
      default: false
  interfaces:
    cloudify.interfaces.lifecycle:
      create:
        implementation: openstack.neutron_plugin.security_group.create
        inputs:
          args:
            default: {}
          openstack_config:
            default: {}
      delete:
        implementation: openstack.neutron_plugin.security_group.delete
        inputs:
          openstack_config:
            default: {}

cloudify.aws.nodes.SecurityGroup:
  derived_from: cloudify.nodes.SecurityGroup
  properties:
    description:
      type: string
      required: true
    rules:
      default: []
      description: >
        You need to pass in either src_group_id (security group ID) OR cidr_ip,
        and then the following three: ip_protocol, from_port and to_port.
  interfaces:
    cloudify.interfaces.lifecycle:
      create: aws.ec2.securitygroup.create
      delete: aws.ec2.securitygroup.delete


Concrete Implementation

• Kubernetes – A server/VM definition

master_host:
  type: cloudify.openstack.nodes.Server
  instances:
    deploy: 1
  properties:
    cloudify_agent:
      user: ubuntu
    image: { get_input: image }
    flavor: { get_input: flavor }
  relationships:
    - target: kubernetes
      type: cloudify.relationships.contained_in
    - target: master_security_group
      type: cloudify.openstack.server_connected_to_security_group

• Under the hood, the orchestrator constructs the proper Nova/Neutron API calls.

• The same concept applies to the Nodejs architecture


Concrete Implementation

• Kubernetes – A security group definition

• Security groups are independent and created immediately

master_security_group:
  type: cloudify.openstack.nodes.SecurityGroup
  properties:
    security_group:
      name: master_security_group
      description: kubernetes master security group
    rules:
      - remote_ip_prefix: 0.0.0.0/0   # for remote install
        port: 22
      - remote_ip_prefix: 0.0.0.0/0
        port: { get_property: [ master, master_port ] }
      - remote_ip_prefix: 0.0.0.0/0   # for minions
        port: 4001
      - remote_ip_prefix: 0.0.0.0/0   # for service
        port: 30000


Concrete Implementation

• Kubernetes – Custom Microservice Type

cloudify.kubernetes.Microservice:
  derived_from: cloudify.kubernetes.Base
  properties:
    name:
      description: the name of the service
    image:
      description: the image to run
      default: ''
    port:
      description: the port for the service
      default: -1
    target_port:
      description: the target port to map
      default: { get_property: [ SELF, port ] }
    protocol:
      description: the service protocol { TCP|UDP }, TCP default
      default: TCP
    replicas:
      description: the number of instances to run
      default: 1
    run_overrides:
      description: json overrides for kubectl run
      default: ''

……….


Concrete Implementation

• Kubernetes – Custom Microservice Type: Behavior

cloudify.kubernetes.Microservice:
  ……….
  interfaces:
    cloudify.interfaces.lifecycle:
      start:
        implementation: kubernetes.kube_plugin.tasks.kube_run_expose
      stop:
        implementation: kubernetes.kube_plugin.remote_tasks.kube_delete

• Note that arbitrary code is tied to lifecycle methods.
• Specific workflows recognize certain interfaces.
• The standard “install” and “uninstall” workflows recognize nodes that implement “cloudify.interfaces.lifecycle”.
• Note also that the “Microservice” type is simply automation; it’s not tied to a VM, e.g. TOSCA ‘nodes’ can be purely logical constructs.


Concrete Implementation

• Kubernetes – Custom Microservice Type: Behavior

• kubernetes.kube_plugin.tasks.kube_run_expose is a reference to a Python function (kube_run_expose)

• It looks at the Microservice node and properties, finds the defined Kubernetes descriptor files, performs substitutions and deploys them in the Kub cluster.
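The substitution step can be illustrated with a toy helper. This is not the plugin's actual code, just a sketch of replacing %{...} placeholders in a descriptor with values from the orchestration context; the keys and descriptor text are made up:

```python
import re

def substitute(text, context):
    """Replace %{key} placeholders in a descriptor with values from an
    orchestration context, mimicking what a task like kube_run_expose
    might do before copying descriptors to the cluster."""
    def repl(match):
        key = match.group(1)
        if key not in context:
            raise KeyError("no value for placeholder %s" % key)
        return str(context[key])
    return re.sub(r"%\{([^}]+)\}", repl, text)

# hypothetical fragment of a pod descriptor before rendering
descriptor = "value: '%{management_ip}'\nport: %{service_port}"
rendered = substitute(descriptor, {"management_ip": "10.0.0.5",
                                   "service_port": 30000})
```

Failing loudly on a missing key is deliberate: a half-rendered descriptor deployed to the cluster is much harder to debug than an early error.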


Concrete Implementation

• Kubernetes – Custom Microservice Type:

nodecellar:
  type: cloudify.kubernetes.Microservice
  properties:
    name: nodecellar
    ssh_username: ubuntu
    ssh_keyfilename: /root/.ssh/agent_key.pem
    config_files:
      - file: pod.yaml
        overrides:
          - "['spec']['template']['spec']['containers'][0]['env'][1]['value'] = '@{mongo_proxy,mongo_info,ip}'"
          - "['spec']['template']['spec']['containers'][0]['env'][2]['value'] = '@{mongo_proxy,mongo_info,port}'"
          - "['spec']['template']['spec']['containers'][1]['env'][0]['value'] = '%{management_ip}'"
          - "['spec']['template']['spec']['containers'][1]['env'][2]['value'] = '%{deployment.id}'"
          - "['spec']['template']['spec']['containers'][1]['env'][3]['value'] = '%{node.id}'"
          - "['spec']['template']['spec']['containers'][1]['env'][4]['value'] = '%{instance.id}'"
      - file: service2.yaml
  relationships:
    - type: cloudify.kubernetes.relationships.connected_to_master
      target: kubernetes_proxy
    - type: cloudify.relationships.connected_to_proxy
      target: mongo_proxy


Concrete Implementation

• Kubernetes Native Descriptor

apiVersion: v1
kind: ReplicationController
metadata:
  name: nodecellar
spec:
  replicas: 1
  selector:
    app: nodecellar
  template:
    metadata:
      name: nodecellar
      labels:
        app: nodecellar
    spec:
      containers:
        - name: nodecellar
          image: dfilppi/nodecellar:v2
          workingDir: /
          command: ["bash", "start.sh"]
          ports:
            - containerPort: 3000
              hostPort: 3000
              hostIP: 0.0.0.0
          env:
            - name: NODECELLAR_PORT
              value: '3000'
            - name: MONGO_HOST
              value: '15.125.81.204'
            - name: MONGO_PORT
              value: '27400'
        - name: diamond
          image: dfilppi/diamond:v1
          workingDir: /
          command: ["bash", "start.sh"]
          ports:
            - containerPort: 5672
              hostPort: 5672
              hostIP: '0.0.0.0'
          env:
            - name: CH_SERVER
              value: '10.67.79.2'
            - name: CC_PORT
              value: '3000'
            - name: CC_DEPLOYMENT
              value: ''
            - name: CC_NODE
              value: ''
            - name: CC_INSTANCE
              value:


Concrete Implementation

• Kubernetes – Containers

• Orchestration info from blueprint passed in environment via overrides

• Container has simple start script that grabs environment and uses it to configure service

import os
from configobj import ConfigObj

# 'ip' is determined earlier in the container start script
cfg = ConfigObj("/diamond/diamond.conf", list_values=False)
cfg['collectors']['ConnCollector']['enabled'] = "true"
cfg['collectors']['ConnCollector']['path_prefix'] = os.getenv("CC_DEPLOYMENT")
cfg['collectors']['ConnCollector']['port'] = os.getenv("CC_PORT")
cfg['collectors']['ConnCollector']['hostname'] = '.'.join([ip, os.getenv("CC_NODE"), os.getenv("CC_INSTANCE")])
cfg['handlers']['CloudifyHandler']['server'] = os.getenv('CH_SERVER')
cfg['handlers']['CloudifyHandler']['port'] = os.getenv('CH_PORT', 5672)
cfg['handlers']['CloudifyHandler']['user'] = os.getenv('CH_USER', 'cloudify')
cfg['handlers']['CloudifyHandler']['password'] = os.getenv('CH_PASSWORD', 'c10udify')
cfg.write()


Concrete Implementation

• Service Metrics Approach

– Scenario requires metrics from Nodejs

– Solution: provide Diamond collector container

– Deploy collector container in Pod with Nodejs

– Configure Diamond using container environment:

• Target host (server running Riemann)

• Information for the queue; points to the correct deployment


Concrete Implementation

• Service Metrics

– Diamond container pushes metrics to Riemann

• Sends the number of connections in the local Pod

– Riemann evaluates metrics. When threshold breached, executes Kubernetes scale workflow on orchestrator

– Orchestrator calls scale on Kubernetes master
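The scale call itself boils down to running kubectl on the master. A hedged Python sketch: the helper names are hypothetical, and the runner is injectable so the command can be inspected without a live cluster:

```python
import subprocess

def scale_rc_command(rc_name, replicas):
    """Build the kubectl command to resize a replication controller."""
    return ["kubectl", "scale", "rc", rc_name, "--replicas=%d" % replicas]

def scale_rc(rc_name, replicas, runner=subprocess.check_call):
    # runner defaults to actually executing kubectl; tests can inject
    # a stub to capture the command instead
    return runner(scale_rc_command(rc_name, replicas))
```

In the deck's setup this would run on (or over SSH to) the Kubernetes master, triggered by the Riemann threshold event.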


Concrete Implementation

• Riemann Overview

– High speed event processor

– Minimal state (only last sample of each metric)

– Calls user “streams” with metric

– Uses functional programming model

– Provides an API with utility streams and functions

– Streams are higher-order functions (functions that return functions)

– Implementation language is Clojure
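The "streams are higher-order functions" idea is easy to mimic outside Clojure. A toy Python sketch (not Riemann's actual API) where each combinator returns a new stream function:

```python
def where(predicate, child):
    """Pass an event to child only when predicate holds."""
    def stream(event):
        if predicate(event):
            child(event)
    return stream

def smap(fn, child):
    """Transform the event, then forward it to child."""
    def stream(event):
        child(fn(event))
    return stream

# compose a pipeline: filter -> transform -> collect
alerts = []
pipeline = where(lambda e: e["service"] == "connections",
                 smap(lambda e: dict(e, metric=e["metric"] * 2),
                      alerts.append))

pipeline({"service": "connections", "metric": 10})  # passes the filter
pipeline({"service": "cpu", "metric": 99})          # filtered out
```

This is the same shape as the Clojure config on the next slide: `where` and `smap` each wrap a child stream, and events flow down the composed chain.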


Concrete Implementation

(where (service #"{{service_selector}}")
  (let [hosts (atom #{})]
    (fn [e]
      ; store or remove host from set depending on whether it has expired
      (let [key (str (:host e) "." (:service e))]
        (do
          (if (expired? e)
            (swap! hosts disj key)
            (swap! hosts conj key))
          ; save the host count
          (riemann.index/update index
            (assoc e :host nil
                     :metric (max 1 (count @hosts))
                     :service "hostcount")))))))

(where (not (nil? (riemann.index/lookup index nil "hostcount")))
  (where (not (expired? event))
    (moving-time-window {{moving_window_size}}
      (smap folds/mean
        (fn [ev]
          (let [hostcnt (:metric (riemann.index/lookup index nil "hostcount"))
                conns   (/ (:metric ev) (max hostcnt 1))
                cooling (not (nil? (riemann.index/lookup index "scaling" "suspended")))]
            (if (and (not cooling)
                     ({{scale_direction}} hostcnt {{scale_limit}})
                     ({{scale_direction}} {{scale_threshold}} conns))
              (do
                (process-policy-triggers ev)
                (riemann.index/update index
                  {:host "scaling" :service "suspended" :time (unix-time)
                   :description "cooldown flag" :metric 0
                   :ttl {{cooldown_time}} :state "ok"})))))))))


Microservice Recap

• Containers prepared for Pod/RC

• Orchestrator feeds connection info from Nodejs to Mongo and Diamond collector to Riemann via environment in Kub descriptors

• Orchestrator copies edited descriptors to Kub master

• Kubectl “create” called remotely

• Nodejs connects to Mongo in cloud

• Diamond connects to Riemann and pushes metrics

• Riemann commands the orchestrator to scale Kubernetes remotely (kubectl scale rc)


Take Aways

• TOSCA makes complex orchestrations (more) understandable

• TOSCA hides cloud (or non-cloud) implementation details from orchestration itself

• An orchestrator can render a TOSCA blueprint on any infrastructure (containers, VMs, hardware, anything with an API)

• An orchestrator can coordinate other orchestrators in hybrid context (Kubernetes, Cloud, Virtual, Physical)


Thank You