developer's perspective
mark lucovsky vp of engineering, cloud foundry
agenda
• cloud foundry – PaaS
• sample app: • polyglot in action • node • redis • json • ruby • html5 • jQuery • multi-tier • horizontally scalable • vmc manifest • etc.
2 developer perspective v2.0
cloud foundry
cloud foundry: open PaaS
• active open source project, liberal license
• infrastructure-neutral core, runs on any IaaS/infrastructure
• extensible runtime/framework and services architecture
  • runtimes: node, ruby, java, scala, erlang, etc.
  • services: postgres, neo4j, mongodb, redis, mysql, rabbitmq
• clouds: from raw infrastructure to fully managed (AppFog)
• VMware's delivery forms
  • raw bits and deployment tools on GitHub
  • Micro Cloud Foundry
  • cloudfoundry.com
key abstractions
• applications
• instances
• services
• vmc – the CLI (maps almost 1:1 onto the control API)
hello world: classic
$ cat hw.c
#include <stdio.h>

int main(void) {
    printf("Hello World\n");
    return 0;
}

$ cc hw.c; ./a.out
hello world of the cloud
$ cat hw.rb
require 'rubygems'
require 'sinatra'

$hits = 0
get '/' do
  $hits = $hits + 1
  "Hello World - #{$hits}"
end

$ vmc push hw
$ cc hw.c
$ vmc push hw
hello world of the cloud: scale it up
$ vmc instances hw 10

get '/' do
  $hits = $hits + 1
  "Hello World - #{$hits}"
end
# above code is broken for > 1 instance
# move hit counter to redis, a hi-perf K/V store

$ vmc create-service redis --bind hw

get '/' do
  $hits = $redis.incr('hits')
  "Hello World - #{$hits}"
end
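The fix above works because redis INCR is atomic on the server, so concurrent app instances never lose updates. A minimal sketch of why this matters, using a hypothetical in-memory FakeRedis stand-in (not part of the app) so it runs without a redis server; with the real redis gem the call is simply redis.incr('hits'):

```ruby
# FakeRedis is an in-memory stand-in for a redis connection; its
# incr is made atomic with a mutex, mirroring redis INCR semantics
class FakeRedis
  def initialize
    @data = Hash.new(0)
    @lock = Mutex.new
  end

  # atomic increment, like redis INCR
  def incr(key)
    @lock.synchronize { @data[key] += 1 }
  end

  def get(key)
    @lock.synchronize { @data[key] }
  end
end

redis = FakeRedis.new

# simulate two app instances, each handling 1000 requests concurrently
threads = 2.times.map do
  Thread.new { 1000.times { redis.incr('hits') } }
end
threads.each(&:join)

puts redis.get('hits')  # => 2000, no lost updates
```

With a per-process `$hits` variable instead, each instance would count only its own traffic and the displayed total would jump around depending on which instance served the request.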
vmc command line tooling

Create app, update app, control app:
  vmc push [appname] [--path] [--url] [--instances N] [--mem] [--no-start]
  vmc update <appname> [--path PATH]
  vmc stop <appname>
  vmc start <appname>
  vmc target [url]

Update app settings, get app information:
  vmc mem <appname> [memsize]
  vmc map <appname> <url>
  vmc instances <appname> <num | delta>
  vmc {crashes, crashlogs, logs} <appname>
  vmc files <appname> [path]

Deal with services, users, and information:
  vmc create-service <service> [--name servicename] [--bind appname]
  vmc bind-service <servicename> <appname>
  vmc unbind-service <servicename> <appname>
  vmc delete-service <servicename>
  vmc user, vmc passwd, vmc login, vmc logout, vmc add-user
  vmc services, vmc apps, vmc info
sample app
stac2: load generation system
[architecture diagram: stac2 load generation system]

• stac2 frontend – 2 x 128MB, ruby 1.8.7, sinatra; talks to the api server via http/json; 100% JS-based UI (jQuery, jQuery UI, haml templates, json-p)
• api server – 16 x 128MB*, node.JS 0.6.8; redis rpush of work items
• http worker – 16 x 128MB*, node.JS 0.6.8; redis blpop of work items
• vmc worker – 96 x 128MB, ruby 1.8.7, sinatra; redis blpop of work items
• redis – shared work queues and stats store
• email reports sent via smtp

* api server and http worker share the same node.JS process/instance
deployment instructions
$ cd ~/stac2
$ vmc push

how is this possible?

$ cd ~/stac2; cat manifest.yml
applications:
  ./nabh:
    instances: 16
    mem: 128M
    runtime: node06
    url: ${name}.${target-base}
    services:
      nab-redis:
        type: :redis
  ./nabv:
    instances: 96
    mem: 128M
    runtime: ruby18
    url: ${name}.${target-base}
    services:
      nab-redis:
        type: :redis
  ./stac2:
    instances: 2
    mem: 128M
    runtime: ruby18
    url: ${name}.${target-base}
design tidbits
• producer/consumer pattern using rpush/blpop
• node.JS: multi-server and high-performance async i/o
• caldecott – aka vmc tunnel for debugging
• redis sorted sets for stats collection
• redis expiring keys for rate calculation
producer/consumer
• core design pattern
• found at the heart of many complex apps
[diagram: producer pushes work items onto a work queue; consumer pops them off]

classic mode:
  - thread pools
  - semaphore/mutex, completion ports, etc.
  - scalability limited to visibility of the work queue

cloud foundry mode:
  - instance pools
  - redis rpush/blpop, rabbit queues, etc.
  - full horizontal scalability, cloud scale
producer/consumer: code
// producer
function commit_item(queue, item) {
  // push the work item onto the proper queue
  redis.rpush(queue, item, function(err, data) {
    // optionally trim the queue, throwing away data as needed
    // to ensure the queue does not grow unbounded
    if (!err && data > queueTrim) {
      redis.ltrim(queue, 0, queueTrim - 1);
    }
  });
}
// consumer
function worker() {
  // blocking wait for work items
  blpop_redis.blpop(queue, 0, function(err, data) {
    // data[0] == queue, data[1] == item
    if (!err) { doWork(data[1]); }
    process.nextTick(worker);
  });
}
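The same rpush/blpop pattern also drives the ruby/sinatra vmc worker. A minimal in-memory sketch in Ruby, using Thread::Queue as a stand-in for the redis list so it runs without a server; with the redis gem, the calls would be redis.rpush(queue, item) on the producer side and the blocking redis.blpop(queue, 0) on the consumer side:

```ruby
# Queue stands in for the redis list: << is RPUSH, pop is BLPOP
work_queue = Queue.new
results = Queue.new

# consumer: blocking wait for work items, like the blpop loop
consumer = Thread.new do
  loop do
    item = work_queue.pop        # blocks until an item arrives (BLPOP)
    break if item == :stop       # sentinel used only by this sketch
    results << "done: #{item}"
  end
end

# producer: commit work items, like commit_item's rpush
5.times { |i| work_queue << "job-#{i}" }
work_queue << :stop
consumer.join

puts results.size  # => 5
```

Because the real queue lives in redis rather than in one process, any number of worker instances can run this loop against the same key and the producers never need to know how many consumers exist.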
node.JS multi-server: http API server
// the api server handles two key load generation apis:
// /http for http load, /vmc for Cloud Foundry API load
var routes = {"/http": httpCmd, "/vmc": vmcCmd};

// http api server booted by app.js, passing redis client
// and Cloud Foundry instance info
function boot(redis_client, cfinstance) {
  var redis = redis_client;

  function onRequest(request, response) {
    var u = url.parse(request.url);
    var path = u.pathname;
    if (routes[path] && typeof routes[path] == 'function') {
      routes[path](request, response);
    } else {
      response.writeHead(404, {'Content-Type': 'text/plain'});
      response.write('404 Not Found');
      response.end();
    }
  }

  server = http.createServer(onRequest).listen(cfinstance['port']);
}
node.JS multi-server: blpop server
var blpop_redis = null;
var status_redis = null;
var cfinstance = null;

// blpop server handles work requests for http traffic that are
// placed on the queue by the http API server. another blpop
// server sits in the ruby/sinatra VMC server
function boot(r1, r2, cfi) {
  // multiple redis clients due to concurrency constraints
  blpop_redis = r1;
  status_redis = r2;
  cfinstance = cfi;
  worker();
}

// this is the blpop server loop
function worker() {
  blpop_redis.blpop(queue, 0, function(err, data) {
    if (!err) { doWork(data[1]); }
    process.nextTick(worker);
  });
}
caldecott: aka vmc tunnel
# create a caldecott tunnel to the redis server
$ vmc tunnel nab-redis redis-cli
Binding Service [nab-redis]: OK
…
Launching 'redis-cli -h localhost -p 10000 -a ...'

# enumerate the keys used by stac2
redis> keys vmc::staging::*
1) "vmc::staging::actions::time_50"
2) "vmc::staging::active_workers"
…

# enumerate actions that took less than 50ms
redis> zrange vmc::staging::actions::time_50 0 -1 withscores
1) "delete_app"
2) "1"
3) "login"
4) "58676"
5) "info"
6) "80390"

# see how many work items we dumped due to concurrency constraints
redis> get vmc::staging::wastegate
"7829"
redis sorted sets for stats collection
# log an action into a sorted set; the net result is a set of
# actions scored by the number of times each was executed.
# count the total action count, and also per elapsed-time bucket
def logAction(action, elapsedTimeBucket)
  # actionKey is the set for all counts
  # etKey is the set for a particular time bucket, e.g. _1s, _50ms
  actionKey = "vmc::#{@cloud}::actions::action_set"
  etKey = "vmc::#{@cloud}::actions::times#{elapsedTimeBucket}"
  @redis.zincrby actionKey, 1, action
  @redis.zincrby etKey, 1, action
end

# enumerate actions and their associated counts
redis> zrange vmc::staging::actions::action_set 0 -1 withscores
1) "login"
2) "212092"
3) "info"
4) "212093"

# enumerate actions that took between 400ms and 1s
redis> zrange vmc::staging::actions::time_400_1s 0 -1 withscores
1) "create-app"
2) "14"
3) "bind-service"
4) "75"
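A sorted set keeps members ordered by score, which is what makes the zrange … withscores queries above cheap. A tiny sketch of those semantics, with a plain Hash standing in for the redis sorted set (the `zincrby` helper below is hypothetical, mirroring what the server does for ZINCRBY):

```ruby
# member => score; a stand-in for one redis sorted set
zset = Hash.new(0)

# ZINCRBY equivalent: bump a member's score, creating it at 0 if absent
def zincrby(zset, incr, member)
  zset[member] += incr
end

3.times { zincrby(zset, 1, 'login') }
zincrby(zset, 1, 'info')

# ZRANGE 0 -1 WITHSCORES equivalent: members in ascending score order
withscores = zset.sort_by { |_member, score| score }
puts withscores.inspect  # => [["info", 1], ["login", 3]]
```

The real server maintains this ordering incrementally, so logging stays O(log n) per action while range queries over counts come back already sorted.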
redis incrby and expire for rate calcs
# to calculate rates (e.g., 4,000 requests per second) we use
# plain old redis.incrby. the trick is that the key contains the
# current 1s timestamp as its suffix. all activity that happens
# within this 1s period accumulates in that key. by setting an
# expire on the key, the key is automatically deleted 10s after
# the last write
def logActionRate(cloud)
  tv = Time.now.tv_sec
  one_s_key = "vmc::#{cloud}::rate_1s::#{tv}"
  # increment the bucket and set the expiry; the key will
  # expire Ns after the last write
  @redis.incrby one_s_key, 1
  @redis.expire one_s_key, 10
end

# return the current rate by looking at the bucket for the previous
# one-second period. by looking further back and averaging, we can
# smooth the rate calc
def actionRate(cloud)
  tv = Time.now.tv_sec - 1
  one_s_key = "vmc::#{cloud}::rate_1s::#{tv}"
  @redis.get one_s_key
end
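The smoothing the comment mentions can be sketched as averaging over the previous N one-second buckets instead of reading only one. A plain Hash stands in for redis here (the real code would issue one @redis.get per key), and `action_rate` is a hypothetical helper, not stac2's:

```ruby
# average the per-second buckets for the previous `window` seconds
def action_rate(buckets, now, window = 5)
  keys = (1..window).map { |n| "rate_1s::#{now - n}" }
  # missing buckets count as 0, just like an expired redis key
  total = keys.map { |k| buckets.fetch(k, 0) }.sum
  total / window.to_f
end

now = 1_000_000  # a fixed "current" tv_sec for the example
buckets = {
  "rate_1s::#{now - 1}" => 4000,
  "rate_1s::#{now - 2}" => 3800,
  "rate_1s::#{now - 3}" => 4200,
  "rate_1s::#{now - 4}" => 4000,
  "rate_1s::#{now - 5}" => 4000,
}

puts action_rate(buckets, now)  # => 4000.0
```

Treating a missing bucket as zero is what makes the 10s expiry safe: idle seconds simply vanish and pull the average down, rather than leaving stale counts behind.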
www.cloudfoundry.com/jobs