The demands and expectations for applications have changed dramatically in recent years. Applications today are deployed on a wide range of infrastructure, from mobile devices up to thousands of nodes running in the cloud, all powered by multi-core processors. They need to be rich and collaborative, have a real-time feel with millisecond response times, and should never stop running. Additionally, modern applications are a mashup of external services that must be consumed and composed to provide the features at hand. We are seeing a new type of application emerging to address these new challenges: Reactive Applications. In this talk we will discuss the four key traits of Reactive (Event-Driven, Scalable, Resilient and Responsive): how they impact application design, how they interact, their supporting technologies and techniques, and how to think when designing and building them, all to make it easier for you and your team to Go Reactive. (All credit for this great slide deck goes to @rolandkuhn, while any inaccuracy is solely mine. The session was not recorded, but if you find the content interesting, you should definitely watch Roland Kuhn presenting it: http://www.infoq.com/presentations/reactive-principles)
Mirco Dotta @mircodotta
Go Reactive
Event-Driven, Scalable, Resilient & Responsive Systems
Reactive Applications
The Four Reactive Traits
http://reactivemanifesto.org/
Starting Point: The User
[Diagram: the user's browser talks to a frontend server, which fans out to internal services backed by storage and a DB, plus an external service, and responds back to the browser]
Responsiveness
• always available
• interactive
• (near) real-time
Bounded Latency
• fan-out in parallel and aggregate

[Diagram: a request fanned out in parallel to services A, B and C, with the responses aggregated]
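The fan-out-and-aggregate pattern can be sketched in Python with `concurrent.futures` (a hedged illustration; `service_a`, `service_b` and `service_c` are hypothetical stand-ins for real remote calls):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical service calls -- stand-ins for real remote requests.
def service_a(): return {"a": 1}
def service_b(): return {"b": 2}
def service_c(): return {"c": 3}

def fan_out_and_aggregate():
    """Issue the three calls in parallel, then merge the results."""
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = [pool.submit(s) for s in (service_a, service_b, service_c)]
        result = {}
        for f in futures:
            result.update(f.result())  # latency tracks the slowest call, not the sum
        return result
```

Because the three calls run in parallel, the aggregated response is bounded by the slowest single call rather than by the sum of all three.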
Bounded Latency
• fan-out in parallel and aggregate
• use circuit breakers for graceful degradation

[Diagram: circuit breaker states: while closed, do work; once open, fail fast]
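A minimal circuit breaker along these lines might look as follows (an illustrative sketch, not a production implementation; the `max_failures` and `reset_timeout` parameters are made-up defaults):

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: while closed, do work; once open, fail fast."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call through
            self.failures = 0
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # any success resets the failure count
        return result
```

Failing fast while the breaker is open is what gives graceful degradation: callers get an immediate, cheap error they can handle, instead of piling up behind a broken dependency.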
Bounded Latency
• fan-out in parallel and aggregate
• use circuit breakers for graceful degradation
• use bounded queues, measure flow rates

Use Bounded Queues:
Latency = QueueLength × ProcessingTime
(for reasonably stable average processing time)
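The formula can be made concrete with a small worked example (the 5 ms processing time and queue length of 100 are assumed figures, not from the talk):

```python
import queue

# Worst-case latency for a bounded queue with stable processing time:
#   Latency = QueueLength * ProcessingTime
queue_length = 100        # maxsize of the bounded queue
processing_time = 0.005   # 5 ms average per item (assumed)
worst_case_latency = queue_length * processing_time  # ~0.5 s

# put() blocks (or fails, with block=False) once the queue is full,
# which bounds latency and memory instead of letting backlog grow unbounded.
inbox = queue.Queue(maxsize=queue_length)
```

The point of the bound: with an unbounded queue, a slow consumer means latency grows without limit; with a bounded one, the worst case is known and backpressure kicks in at the producer.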
Resilience
Responsive in the Face of Failure
Handle Failure
• software will fail
• hardware will fail
• humans will fail
• system still needs to respond ➟ resilience
Distribute!
Asynchronous Failure
• parallel fan-out & distribution ➟ asynchronous execution
• compartmentalization & isolation
• no response? ➟ timeout events
• someone else’s exception? ➟ supervision
• location transparency ➟ seamless resilience

[Diagram: Request, Response and Failure all flow as messages: message-driven]
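Turning a missing response into an explicit timeout event might look like this sketch (the `slow_service` dependency and the `call_with_timeout` helper are hypothetical):

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def slow_service():
    time.sleep(0.5)  # hypothetical unresponsive dependency
    return "data"

def call_with_timeout(fn, timeout):
    """No response within the deadline becomes an explicit timeout event."""
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fn)
    try:
        return ("ok", future.result(timeout=timeout))
    except TimeoutError:
        return ("timeout", None)  # react to the event: degrade, retry, escalate
    finally:
        pool.shutdown(wait=False)  # don't block the caller on the stuck work
```

The caller never blocks indefinitely: either a response arrives in time or a timeout event does, and both are ordinary values the program can react to.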
Elasticity
Responsive in the Face of Changing Load
Handle Load
• partition incoming work for distribution
• share nothing
• scale capacity up and down on demand
• supervise and adapt
• location transparency ➟ seamless scalability
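Partitioning incoming work so that workers share nothing can be as simple as hashing a routing key (a sketch; `partition_for` is an illustrative helper, not an API from the talk):

```python
import hashlib

def partition_for(key: str, num_workers: int) -> int:
    """Stateless, deterministic routing: the same key always lands on the
    same worker, so each worker can keep per-key state locally and the
    workers share nothing."""
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_workers
```

Because routing depends only on the key, capacity can scale by changing `num_workers` (repartitioning aside), and no coordination or shared mutable state is needed between workers.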
… this has some interesting consequences!
Consequences
• distribution & scalability ➟ loss of strong consistency
• CAP theorem? not as relevant as you think
• eventual consistency ➟ gossip, heartbeats, dissemination of change
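The gossip-style dissemination of change mentioned above can be sketched as a push round with per-key versions (a toy model; last-writer-wins by version number is my assumption here, and real protocols are considerably more involved):

```python
import random

def gossip_round(states: dict) -> None:
    """One gossip round: each node pushes its known entries to one random
    peer; the peer keeps the higher version per key. Repeated rounds make
    all nodes converge on the newest versions: eventual consistency."""
    nodes = list(states)
    for node in nodes:
        peer = random.choice([n for n in nodes if n != node])
        for key, (version, value) in states[node].items():
            if key not in states[peer] or states[peer][key][0] < version:
                states[peer][key] = (version, value)
```

No round-trip coordination or global lock is needed; updates spread epidemically, which is why reads may briefly see stale values before convergence.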
Corollary
• Reactive needs to be applied all the way down
But what about us, the developers?
Step 1: Take a Leap of Faith
• thread-based models have made us defensive
• “don’t let go of your thread!”
• “asynchrony is suspicious”
• “better return a strict value, even if that needs blocking”
• it is okay to write a method that returns a Future!
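The "return a Future" advice looks like this in Python's `concurrent.futures` (a sketch; `fetch_profile` and its result shape are hypothetical):

```python
from concurrent.futures import Future, ThreadPoolExecutor

_pool = ThreadPoolExecutor(max_workers=4)

def fetch_profile(user_id: int) -> Future:
    """Returns immediately with a Future instead of blocking for the value.
    The lookup itself is a hypothetical stand-in for a real query."""
    return _pool.submit(lambda: {"id": user_id, "name": "unknown"})

# The caller decides when (and whether) to wait:
future = fetch_profile(42)
# ... do other work here ...
profile = future.result()  # or future.add_done_callback(...) to stay async
```

Returning the Future hands control back to the caller at once; composing on the Future (callbacks, further submits) is what lets the program flow stay asynchronous end to end.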
Step 2: Rethink the Architecture
• break out of the synchronous blocking prison
• focus on communication & protocols
• asynchronous program flow
  ➟ no step-through debugging
  ➟ tracing and monitoring
• loose coupling
Step 3: Profit!
• clean business logic, separate from failure handling
• distributable units of work
• effortless parallelization
• fewer assumptions ➟ lower maintenance cost
Summary
Reactive Applications
The Four Reactive Traits
http://reactivemanifesto.org/
©Typesafe 2014 – All Rights Reserved
All credit for this great slidedeck goes to Roland Kuhn (@rolandkuhn), while any inaccuracy is mine.