
Page 1: CloudHub Fabric

CloudHub Fabric

Shanky Gupta

Page 2: CloudHub Fabric

Introduction

– CloudHub Fabric provides scalability, workload distribution, and added reliability to CloudHub applications. These capabilities are powered by CloudHub's scalable load-balancing service, Worker Scaleout, and Persistent Queues features.

– You can enable CloudHub Fabric features on a per-application basis using the Runtime Manager console when you deploy a new application or redeploy an existing one.

Page 3: CloudHub Fabric

Prerequisites

– CloudHub Fabric requires a CloudHub Enterprise or Partner account type. This document assumes that your account type allows you to use this feature and that you are familiar with deploying applications using the Runtime Manager console.

Page 4: CloudHub Fabric

Worker Scaleout

– CloudHub lets you select the number and size of your application's workers, providing horizontal scalability. This fine-grained control over computing capacity gives you the flexibility to scale your application up to handle higher loads, or down during low-load periods, at any time.

Page 5: CloudHub Fabric

Worker Scaleout …continued

– Use the drop-down menus next to Workers to pick the number and size of your application's workers and configure the computing power you need.

Page 6: CloudHub Fabric

Worker Scaleout …continued

– Each application can be deployed with up to 4 workers of any size. However, you may be limited to fewer vCores than you need, based on how many are available in your subscription. See Worker Sizing for more information about deploying to multiple vCores.

– Worker scaleout also adds reliability: MuleSoft automatically distributes multiple workers for the same application across two or more datacenters for maximum availability.

– When deploying your application to two or more workers, you can distribute workloads across these instances of Mule. CloudHub provides two facilities for this:

– The HTTP load-balancing service, which automatically distributes HTTP requests among your assigned workers

– Persistent message queues
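The round-robin behavior of the HTTP load-balancing facility above can be sketched in a few lines. This is a toy model for illustration only, not CloudHub's actual internals; the worker names are hypothetical:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy model of an HTTP load balancer that rotates
    incoming requests across a fixed pool of workers."""
    def __init__(self, workers):
        self._pool = cycle(workers)

    def route(self, request):
        # Each incoming request goes to the next worker in the rotation.
        worker = next(self._pool)
        return worker, request

# Two workers, as in a typical scaled-out deployment.
balancer = RoundRobinBalancer(["worker-1", "worker-2"])
targets = [balancer.route(f"req-{i}")[0] for i in range(4)]
print(targets)  # alternates between the two workers
```

With two assigned workers, consecutive requests alternate between them, which is the effect the slide describes for the application URL.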

Page 7: CloudHub Fabric

Persistent Queues

– Persistent queues ensure zero message loss and let you distribute workloads across a set of workers.

– If your application is deployed to more than one worker, persistent queues enable communication between workers and workload distribution. For example, if a large file is placed on the queue, your workers can divide it up and process it in parallel.

– Persistent queues guarantee delivery of your messages, even if one or more workers or datacenters go down, providing additional message security for high-stakes processing.
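The "divide a large file and process it in parallel" pattern above can be sketched locally, with a thread pool standing in for multiple workers draining the same queue. This is a simplification for illustration; real CloudHub workers are separate Mule instances:

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Stand-in for per-worker processing, e.g. transforming one record.
    return chunk.upper()

# A "large file" split into queue messages, one chunk per message.
chunks = ["alpha", "beta", "gamma", "delta"]

# Two threads play the role of two CloudHub workers consuming the queue.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(process_chunk, chunks))

print(results)
```

The chunks are processed concurrently, but `map` returns results in the original order, so downstream steps can still reassemble the file.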

Page 8: CloudHub Fabric

Persistent Queues …continued

– With persistent queues enabled on your application, you have runtime visibility into your queues on the Queues tab of the Runtime Manager console.

– You can enable data-at-rest encryption for all your persistent queues. This ensures that any shared application data written to a persistent queue is encrypted, helping you meet security and compliance requirements.

– Messages are retained in a persistent queue for up to 4 days and can be up to 256 KB in length. There is no limit on the number of messages in a persistent queue.
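The limits stated above can be checked client-side before publishing, so oversized payloads fail fast instead of being rejected by the queue. A minimal sketch using the constants from this slide:

```python
MAX_MESSAGE_BYTES = 256 * 1024          # 256 KB per-message limit
MAX_RETENTION_SECONDS = 4 * 24 * 3600   # 4-day retention window

def can_publish(payload: bytes) -> bool:
    """Reject payloads that exceed the persistent-queue message limit."""
    return len(payload) <= MAX_MESSAGE_BYTES

small = b"x" * 1024
too_big = b"x" * (MAX_MESSAGE_BYTES + 1)
print(can_publish(small), can_publish(too_big))
```

Payloads larger than 256 KB would need to be split into smaller messages, or stored elsewhere with only a reference placed on the queue.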

Page 9: CloudHub Fabric

Enabling CloudHub Fabric Features

– You can enable or disable either or both CloudHub Fabric features in one of two ways:

– When deploying an application to CloudHub for the first time using the Runtime Manager console

– By accessing the Deployment tab in the Runtime Manager console for a previously deployed application

– Next to Workers, select options from the drop-down menus to define the number and size of workers assigned to your application.

– To enable queue persistence, click an application to see its overview, click Manage Application, then click Settings and select the Persistent Queues checkbox.
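The same settings chosen in the console can also be supplied programmatically when deploying through the CloudHub REST API. The field names below (`workers`, `persistentQueues`) are assumptions modeled on the CloudHub v2 applications API and should be verified against the current API reference before use; this sketch only assembles the request body:

```python
import json

def deployment_settings(app_name, worker_count, worker_type, persistent_queues):
    """Assemble CloudHub Fabric settings for a deployment request body.
    Field names are assumptions based on the CloudHub v2 applications
    API and are not verified here."""
    return {
        "domain": app_name,
        "workers": {
            "amount": worker_count,         # up to 4 workers per application
            "type": {"name": worker_type},  # e.g. "Micro", "Small"
        },
        "persistentQueues": persistent_queues,
    }

payload = deployment_settings("my-app", 2, "Micro", True)
print(json.dumps(payload, indent=2))
```

Scripting the deployment this way keeps worker count, worker size, and queue persistence under version control alongside the application.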

Page 10: CloudHub Fabric

Enabling CloudHub Fabric Features …continued

*** If your application is already deployed, you must redeploy it for the new settings to take effect.

Page 11: CloudHub Fabric

How CloudHub Fabric is Implemented

– Internally, worker scaling and VM queue load balancing are implemented using Amazon SQS.

– Object stores in your applications are also always stored persistently using Amazon SQS, even if you did not enable persistent queues for your application.

– HTTP load balancing is implemented by an internal reverse proxy server. Requests to the application (domain) URL http://appname.cloudhub.io are automatically load-balanced across all of the application's worker URLs.

– Clients can bypass the CloudHub Fabric load balancer by using a worker's direct URL.
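The distinction between the load-balanced application URL and a direct worker URL can be made concrete as follows. The `mule-worker-` prefix and region-qualified host shown here are the commonly documented pattern; confirm the exact host format and region for your environment:

```python
def app_url(app_name):
    # Load-balanced application URL: requests are spread across all workers.
    return f"http://{app_name}.cloudhub.io"

def worker_url(app_name, region="us-east-1"):
    # Direct worker URL: bypasses the CloudHub Fabric load balancer.
    # The "mule-worker-" prefix, region segment, and port 8081 are the
    # commonly documented pattern, assumed here for illustration.
    return f"http://mule-worker-{app_name}.{region}.cloudhub.io:8081"

print(app_url("appname"))
print(worker_url("appname"))
```

Hitting a worker URL directly is useful for debugging a single instance, but it forfeits the load balancing and failover that the application URL provides.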

Page 12: CloudHub Fabric

Persistent Queuing Behavior for Applications Containing Batch Jobs

– When you deploy an application containing batch jobs to CloudHub with persistent queues enabled, the batch jobs use CloudHub's persistent queuing feature for their batch queuing functionality, ensuring zero message loss. However, there are two limitations:

– Batch jobs using CloudHub persistent queues experience additional latency.

– CloudHub persistent queues occasionally process a message more than once. If your use case requires that each message be processed only once, consider deploying the application without enabling persistent queues.
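Because persistent queues deliver messages at least once, consumers that must not repeat work typically deduplicate on a message ID. A minimal in-memory sketch of this idempotent-consumer pattern (production code would persist the seen-ID set so it survives worker restarts):

```python
processed_ids = set()
results = []

def handle(message_id, payload):
    """Process each message at most once, even if it is redelivered."""
    if message_id in processed_ids:
        return False  # duplicate delivery: skip processing
    processed_ids.add(message_id)
    results.append(payload)
    return True

# Simulate a redelivered message: "m1" arrives twice.
deliveries = [("m1", "a"), ("m2", "b"), ("m1", "a")]
outcomes = [handle(mid, body) for mid, body in deliveries]
print(outcomes, results)
```

The duplicate delivery of "m1" is detected and skipped, so the payload is processed only once despite the queue's at-least-once semantics.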