CSC 536 Lecture 9
Outline
Case study: Amazon Dynamo
Brewer’s CAP theorem
Recovery
Dynamo: Amazon’s key-value storage system
Amazon Dynamo
A data store for applications that require:
primary-key access to data
data size < 1 MB
scalability
high availability
fault tolerance
and really low latency
No need for a relational DB
Complexity and ACID properties imply little parallelism and low availability
No need for stringent security, because it is used only by internal services
Amazon apps that use Dynamo
Perform simple read/write ops on single, small ( < 1MB) data objects which are identified by a unique key.
best seller lists
shopping carts
customer preferences
session management
sales rank
product catalog
etc.
Design Considerations
“ … customers should be able to view and add items to their shopping cart even if disks are failing, network routes are flapping, or data centers are being destroyed by tornados.”
Design Considerations
“Always writeable”
users must always be able to add/delete from the shopping cart
no update is rejected because of failure or concurrent write
data must be replicated across data centers
resolve conflicts during reads, not writes
Let each application decide for itself how to resolve conflicts
Single administrative domain
all nodes are trusted (no Byzantine failures) because the service is not intended for external users
Design Considerations
Unstructured data
No need for hierarchical namespaces
No need for a relational schema
Very high availability and low latency
“At least 99.9% of read and write operations to be performed within a few hundred milliseconds”
an “average” or “median” latency target is not good enough; the guarantee applies to the 99.9th percentile
Avoid routing requests through multiple nodes, which would slow things down
Avoid ACID guarantees
ACID guarantees tend to have poor availability
Design Considerations
Incremental scalability
Adding a single node should not affect the system significantly
Decentralization and symmetry
All nodes have the same responsibilities
Favor P2P techniques over centralized control
No single point of failure
Take advantage of node heterogeneity
Nodes with larger disks should store more data
Dynamo API
A key is associated with each stored item
Operations that are supported:
get(key)
locates object replicas associated with key and returns the object or list of objects along with version numbers
put(key, context, item)
determines where the item replicas should be placed based on the item key and writes the replicas to disk
The context encodes system metadata about the item, including version information
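To make the call shapes above concrete, here is a tiny in-memory stand-in for the get/put interface, written as a sketch; the class, its fields, and the simple version counter are illustrative assumptions, not Dynamo’s implementation.

```python
# Minimal in-memory stand-in for the get/put interface described above.
class TinyStore:
    def __init__(self):
        self._data = {}      # key -> (item, context)

    def get(self, key):
        """Return the item associated with key, plus its context (metadata)."""
        item, context = self._data.get(key, (None, {"version": 0}))
        return item, context

    def put(self, key, context, item):
        """Write the (key, item) pair; the context carries version metadata."""
        new_context = {"version": context.get("version", 0) + 1}
        self._data[key] = (item, new_context)

store = TinyStore()
item, ctx = store.get("cart:alice")          # read the (possibly empty) cart
store.put("cart:alice", ctx, ["book-123"])   # write back, passing the read context
```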
Partitioning Algorithm
For scalability, Dynamo makes use of a large number of nodes
across clusters and data centers
Also for scalability, Dynamo must balance the load, using a hash function to map data items to nodes
To ensure incremental scalability, Dynamo uses consistent hashing
Partitioning Algorithm
Consistent hashing
Hash function produces an m-bit number, which defines a circular name space
Each data item has a key and is mapped to a number in the name space obtained using Hash(key)
Nodes are assigned numbers randomly in the name space
Data item is then assigned to the first clockwise node
the successor, Succ()
In consistent hashing the effect of adding a node is localized
On average, K/n objects must be remapped (K = # of keys, n = # of nodes)
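A minimal sketch of this scheme, assuming MD5 as the hash function and made-up node names: nodes and keys are hashed onto the same circular space, and Succ(key) is found by walking clockwise.

```python
import hashlib
from bisect import bisect_right

M = 2 ** 128                       # size of the circular name space (MD5 output range)

def hash_point(value: str) -> int:
    """Map a string (key or node name) to a point on the ring."""
    return int(hashlib.md5(value.encode()).hexdigest(), 16) % M

def succ(ring, key):
    """Succ(): the first node clockwise from Hash(key)."""
    point = hash_point(key)
    positions = [p for p, _ in ring]
    i = bisect_right(positions, point) % len(ring)
    return ring[i][1]

nodes = ["node-a", "node-b", "node-c"]
ring = sorted((hash_point(n), n) for n in nodes)   # nodes placed pseudo-randomly
print(succ(ring, "cart:alice"))                    # the node responsible for this key
```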
Load Distribution
Problem: Random assignment of node to position in ring may produce non-uniform distribution of data
Solution: virtual nodes
Assign several random numbers to each physical node
One corresponds to the physical node, the others to virtual ones
Advantages
If a node becomes unavailable, its load is easily and evenly dispersed across the available nodes
When a node becomes available, it accepts a roughly equivalent amount of load from the other available nodes
The number of virtual nodes that a node is responsible for can be decided based on its capacity
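A sketch of virtual nodes that reuses hash_point() and succ() from the previous sketch; the token count per physical node is an illustrative assumption.

```python
from collections import Counter

def build_virtual_ring(nodes, tokens_per_node=32):
    """Place each physical node at several pseudo-random ring positions (virtual nodes)."""
    return sorted((hash_point(f"{node}#{token}"), node)
                  for node in nodes for token in range(tokens_per_node))

vring = build_virtual_ring(["node-a", "node-b", "node-c"])
# Keys now spread roughly evenly over the physical nodes, and when a node
# joins or leaves, its load is dispersed across all remaining nodes.
load = Counter(succ(vring, f"key-{i}") for i in range(10_000))
print(load)
```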
Failures
Amazon has a number of data centers
consisting of a number of clusters of commodity machines
Individual machines fail regularly
Sometimes entire data centers fail due to power outages, network partitions, tornados, etc.
To handle failures
items are replicated
replicas are not only spread across a cluster but across multiple data centers
Replication
Data is replicated at N nodes
Succ(key) = coordinator node
The coordinator replicates the object at the N-1 successor nodes in the ring, skipping virtual nodes corresponding to already-used physical nodes
Preference list: the list of nodes that store a particular key
There are actually > N nodes on the preference list, in order to ensure N “healthy” nodes at all times
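A sketch of building the preference list by walking the ring clockwise and skipping virtual nodes whose physical node was already chosen; it reuses hash_point() and the virtual ring from the sketches above, and N = 3 is illustrative.

```python
def preference_list(ring, key, n):
    """Walk clockwise from Hash(key), skipping virtual nodes whose physical
    node has already been chosen, until n distinct physical nodes are found."""
    point = hash_point(key)
    clockwise = [node for pos, node in ring if pos >= point] + \
                [node for pos, node in ring if pos < point]
    chosen = []
    for node in clockwise:
        if node not in chosen:
            chosen.append(node)
        if len(chosen) == n:
            break
    return chosen

print(preference_list(vring, "cart:alice", n=3))   # coordinator first, then the replicas
```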
Data Versioning
Dynamo provides eventual consistency
Updates can be propagated to replicas asynchronously
a put( ) call may return before all replicas have been updated
Why? To provide low latency and high availability
Implication: a subsequent get( ) may return stale data
Some apps can be designed to work in this environment
e.g., the “add-to/delete-from cart” operation
It’s okay to add to an old cart, as long as all versions of the cart are eventually reconciled
Note: eventual consistency
Data Versioning
Dynamo treats each modification as a new (& immutable) version of the object
Multiple versions can exist at the same time
Usually, new versions subsume the old versions, so there is no problem
Sometimes concurrent updates and failures generate conflicting versions
e.g., if there’s been a network partition
Parallel Version Branches
Vector clocks are used to identify causally related versions and parallel (concurrent) versions
For causally related versions, accept the final version as the “true” version
For parallel (concurrent) versions, use some reconciliation technique to resolve the conflict
Reconciliation technique is app dependent
Typically this is handled by merging
For add-to-cart operations, nothing is lost
For delete-from-cart, deleted items might reappear after the reconciliation
Parallel Version Branches example
Dk([Sx,i], [Sy,j]): object Dk with vector clock ([Sx,i], [Sy,j]),
where [Sx,i] indicates i updates by server Sx
and [Sy,j] indicates j updates by server Sy
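A minimal sketch of vector-clock comparison along these lines, with clocks represented as dicts from server id to update count; the function names and the example versions are illustrative.

```python
def descends(a: dict, b: dict) -> bool:
    """True if the version with clock `a` causally follows (or equals) `b`."""
    return all(a.get(server, 0) >= count for server, count in b.items())

def surviving_versions(versions):
    """Drop versions dominated by another; more than one survivor means
    concurrent branches that the application must reconcile (e.g., merge carts)."""
    survivors = []
    for clock, value in versions:
        dominated = any(other != clock and descends(other, clock)
                        for other, _ in versions)
        if not dominated:
            survivors.append((clock, value))
    return survivors

# cart-v1 and cart-v2 are causally related; cart-v3 is a concurrent branch.
versions = [({"Sx": 1}, "cart-v1"),
            ({"Sx": 2}, "cart-v2"),
            ({"Sx": 1, "Sy": 1}, "cart-v3")]
print(surviving_versions(versions))   # cart-v2 and cart-v3 remain; cart-v1 is dominated
```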
Execution of get( ) and put( )
Operations can originate at any node in the system
Clients may
route the request through a load-balancing coordinator node, or
use client software that routes the request directly to the coordinator for that object
The coordinator contacts R nodes for reading and W nodes for writing, where R + W > N
“Sloppy Quorum”
put( ): the coordinator writes to the first N healthy nodes on the preference list
If W writes succeed, the write is considered to be successful
get( ): the coordinator reads from N nodes and waits for R responses
If they agree, return the value
If they disagree but are causally related, return the most recent value
If they are causally unrelated, apply app-specific reconciliation techniques and write back the corrected version
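A minimal sketch of quorum-style reads and writes with parameters N, R, W (R + W > N); the Replica class and the reconcile hook are assumptions for illustration, not Dynamo’s code.

```python
class Replica:
    """Trivial stand-in for a storage node; `up` simulates reachability."""
    def __init__(self):
        self.data = {}
        self.up = True
    def write(self, key, value):
        if not self.up:
            raise IOError("replica unreachable")
        self.data[key] = value
    def read(self, key):
        if not self.up:
            raise IOError("replica unreachable")
        return self.data.get(key)

def quorum_put(replicas, key, value, n, w):
    """Write to the first n replicas; succeed once w acknowledgements arrive."""
    acks = 0
    for rep in replicas[:n]:
        try:
            rep.write(key, value)
            acks += 1
        except IOError:
            continue                      # hinted handoff would kick in here
    return acks >= w

def quorum_get(replicas, key, n, r, reconcile=lambda vs: vs[0]):
    """Read from up to n replicas, stop after r responses, reconcile the results."""
    responses = []
    for rep in replicas[:n]:
        try:
            responses.append(rep.read(key))
        except IOError:
            continue
        if len(responses) >= r:
            break
    return reconcile(responses) if len(responses) >= r else None

replicas = [Replica() for _ in range(3)]                             # N = 3
print(quorum_put(replicas, "cart:alice", ["book-123"], n=3, w=2))    # True
print(quorum_get(replicas, "cart:alice", n=3, r=2))                  # ['book-123']
```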
Hinted Handoff
What if a write operation can’t reach the first N nodes on the preference list?
To preserve availability and durability, store the replica temporarily on another node in the preference list
accompanied by a metadata “hint” that remembers where the replica should be stored
this (other) node will eventually deliver the update to the correct node when it recovers
Hinted handoff ensures that read and write operations don’t fail because of network partitioning or node failures.
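A minimal sketch of the hinted-handoff idea: write to a standby node together with a hint naming the intended node, then hand the data back once that node is reachable again; the data structures and names are illustrative.

```python
hinted = []   # replicas held temporarily on behalf of unreachable nodes

def write_with_hint(standby_store, key, value, intended_node):
    """Store the replica on a standby node, tagged with where it really belongs."""
    standby_store[key] = value
    hinted.append({"key": key, "value": value, "intended": intended_node})

def handoff(recovered_node, recovered_store):
    """When the intended node recovers, deliver its hinted replicas to it."""
    for hint in [h for h in hinted if h["intended"] == recovered_node]:
        recovered_store[hint["key"]] = hint["value"]
        hinted.remove(hint)

standby, node_b = {}, {}
write_with_hint(standby, "cart:alice", ["book-123"], intended_node="node-b")
handoff("node-b", node_b)
print(node_b)   # {'cart:alice': ['book-123']}
```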
Handling Permanent Failures
Hinted replicas may be lost before they can be returned to the right node.
Other problems may cause replicas to be lost or fall out of agreement
Merkle trees allow two nodes to compare a set of replicas and determine fairly easily
whether or not they are consistent
where the inconsistencies are
Merkle trees
Merkle trees have leaves whose values are hashes of the values associated with keys (one key/leaf)
Parent nodes contain hashes of their children
Eventually, the root contains a hash that represents everything in that replica
Each node maintains a separate Merkle tree for each key range (the set of keys covered by a virtual node) it hosts
To detect inconsistency between two sets of replicas, compare the roots
Source of inconsistency can be detected by recursively comparing children
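A minimal sketch of a Merkle tree over one key range, assuming SHA-1 for hashing; comparing roots detects divergence. Here mismatching leaves are found by direct comparison, whereas a real implementation would recurse down only the mismatching subtrees.

```python
import hashlib

def sha(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()

def build_tree(values):
    """Return the tree as a list of levels: hashed leaves first, root last."""
    level = [sha(v.encode()) for v in values]
    levels = [level]
    while len(level) > 1:
        level = [sha(level[i] + (level[i + 1] if i + 1 < len(level) else b""))
                 for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def inconsistent_leaves(tree_a, tree_b):
    """[] if the roots match; otherwise the leaf positions that differ."""
    if tree_a[-1] == tree_b[-1]:
        return []
    return [i for i, (a, b) in enumerate(zip(tree_a[0], tree_b[0])) if a != b]

r1 = build_tree(["alice=v2", "bob=v1", "carol=v3", "dave=v1"])
r2 = build_tree(["alice=v2", "bob=v2", "carol=v3", "dave=v1"])
print(inconsistent_leaves(r1, r2))   # [1]: only bob's entry diverged
```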
Membership and Failure Detection
Temporary failures of nodes are possible but shouldn’t cause load re-balancing
Additions and deletions of nodes are also explicitly executed by an administrator
A gossip-based protocol is used to ensure that every node eventually has a consistent view of its membership list
The membership view includes the key ranges each node is responsible for
Gossip-based Protocol
Periodically, each node contacts another node in the network, randomly selected
Nodes compare their membership histories and reconcile them
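A minimal sketch of gossip-style reconciliation of membership views: each round, every node picks a random peer and the two keep, per member, whichever record carries the higher version; the view layout and version counters are assumptions.

```python
import random

def reconcile(view_a, view_b):
    """Merge two views, keeping the higher-versioned record for each member."""
    merged = dict(view_a)
    for member, record in view_b.items():
        if member not in merged or record["version"] > merged[member]["version"]:
            merged[member] = record
    return merged

def gossip_round(views):
    """One round: each node exchanges and reconciles views with a random peer."""
    names = list(views)
    for name in names:
        peer = random.choice([n for n in names if n != name])
        merged = reconcile(views[name], views[peer])
        views[name], views[peer] = dict(merged), dict(merged)

views = {
    "node-a": {"node-a": {"version": 3, "alive": True}},
    "node-b": {"node-b": {"version": 5, "alive": True}},
    "node-c": {"node-c": {"version": 1, "alive": True}},
}
for _ in range(3):
    gossip_round(views)    # views converge toward a consistent membership list
print(views["node-a"])
```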
Load Balancing for Additions and Deletions
When a node is added, it acquires key values from other nodes in the network.
Nodes learn of the added node through the gossip protocol, contact the node to offer their keys, which are then transferred after being accepted
When a node is removed, a similar process happens in reverse
Experience has shown that this approach leads to a relatively uniform distribution of key/value pairs across the system
Problem / Technique / Advantage
Partitioning / Consistent hashing / Incremental scalability
High availability for writes / Vector clocks, reconciled during reads / Version size is decoupled from update rates
Temporary failures / Sloppy quorum, hinted handoff / Provides high availability & durability guarantee when some of the replicas are not available
Permanent failures / Anti-entropy using Merkle trees / Synchronizes divergent replicas in the background
Membership & failure detection / Gossip-based protocol / Preserves symmetry and avoids having a centralized registry for storing membership and node liveness information
Summary
High scalability, including incremental scalability
Very high availability is possible, at the cost of consistency
App developers can customize the storage system to emphasize performance, durability, or consistency
The primary parameters are N, R, and W
Dynamo shows that decentralization and eventual consistency can provide a satisfactory platform for hosting highly-available applications.
Dynamo vs BigTable
Different types of data storage, designed for different needs
Dynamo optimizes latency
BigTable emphasizes throughput
More precisely
Dynamo tends to emphasize network-partition fault-tolerance and availability, at the expense of consistency
BigTable tends to emphasize network-partition fault-tolerance and consistency over availability
Brewer’s CAP theorem
Impossible for a distributed data store to simultaneously provide
Consistency (C)
Availability (A)
Partition-tolerance (P)
Conjectured by Brewer in 2000
Formally “proven” by Gilbert & Lynch in 2002
Brewer’s CAP theorem
Assume two nodes storing replicated data on opposite sides of a partition
Allowing at least one node to update state will cause the nodes to become inconsistent, thus forfeiting C
Likewise, if the choice is to preserve consistency, one side of the partition must act as if it is unavailable, thus forfeiting A
Only when nodes communicate is it possible to preserve both consistency and availability, thereby forfeiting P
Naïve implication (the “2 out of 3” view)
Since, for wide-area systems, designers cannot forfeit P, they must make a difficult choice between C and A
What about latency?
Latency and partitions are related
Operationally, the essence of CAP takes place during a partition-caused timeout, a period when the program must make a fundamental decision:
block/cancel the operation and thus decrease availability, or
proceed with the operation and thus risk inconsistency
The first results in high latency (waiting until partition is repaired) and the second results in possible inconsistency
Brewer’s CAP theorem
A more sophisticated view
Because partitions are rare, there is little reason to forfeit C or A when the system is not partitioned
The choice between C and A can occur many times within the same system at very fine granularity
not only can subsystems make different choices, but the choice can change according to the operation or specific data or user
The 3 properties are more continuous than binary
Availability is a percentage between 0 and 100 percent
Different consistency models exist
Different kinds of system partition can be defined
Brewer’s CAP theorem
BigTable is a “CP type system”
Dynamo is an “AP type system”
Yahoo’s PNUTS is an “AP type system”
maintains remote copies asynchronously
makes the “local” replica the master, which decreases latency
works well in practice because the master for a single user’s data is naturally located according to that user’s (normal) location
Facebook uses a “CP type system”
the master copy is always in one location
the user typically has a closer but potentially stale copy
when users update their pages, the update goes to the master copy directly, as do all the user’s reads for about 20 seconds, despite higher latency; after 20 seconds, the user’s traffic reverts to the closer copy
“AP”, “CP” are really rough generalizations
Recovery
Error recovery: replace a present erroneous state with an error-free state
Backward recovery: bring the system into a previously correct state
Need to record the system’s state from time to time (checkpoints)
Example: retransmit a lost message
Forward recovery: bring the system to a correct new state from which it can continue to execute
Only works with known errors
Example: error correction
Backward recovery
Backward recovery is typically used
It is more general
However
Recovery is expensive
Sometimes we can’t go back (e.g., a file is deleted)
Checkpoints are expensive
Solution for the last point: message logging
Sender-based
Receiver-based
Checkpoints: Common approach
Periodically make a “big” checkpoint
Then, more frequently, make an incremental addition to it
For example: the checkpoint could be copies of some files, or of a database
Looking ahead, the incremental data could be “operations” run on the database since the last transaction finished (committed)
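A minimal sketch of this approach: an infrequent full checkpoint plus an incremental log of operations applied since, with recovery restoring the snapshot and replaying the log. Names are illustrative, and a real system would keep both the snapshot and the log on stable storage.

```python
import copy

class CheckpointedStore:
    def __init__(self):
        self.state = {}
        self.snapshot = {}     # last full ("big") checkpoint
        self.op_log = []       # operations applied since that checkpoint

    def put(self, key, value):
        self.state[key] = value
        self.op_log.append(("put", key, value))

    def full_checkpoint(self):
        self.snapshot = copy.deepcopy(self.state)   # the expensive, infrequent step
        self.op_log = []                            # start a fresh increment

    def recover(self):
        self.state = copy.deepcopy(self.snapshot)   # restore the big checkpoint
        for op, key, value in self.op_log:          # then replay the increment
            if op == "put":
                self.state[key] = value

store = CheckpointedStore()
store.put("x", 1); store.full_checkpoint(); store.put("y", 2)
store.recover()
print(store.state)   # {'x': 1, 'y': 2}
```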
Problems with checkpoints
P and Q are interacting
Each makes independent checkpoints now and then
(diagram: P sends a request to Q, Q sends a reply, and each process takes its checkpoints at independent times)
Problems with checkpoints
Q crashes and rolls back to checkpoint
It will have “forgotten” the message from P
Problems with checkpoints
… Yet Q may even have replied.
Who would care? Suppose reply was “OK to release the cash. Account has been debited”
Two related concerns
First, Q needs to see that request again, so that it will reenter the state in which it sent the reply
Need to regenerate the input request
But if Q is non-deterministic, it might not repeat those actions even with identical input
So that might not be “enough”
Rollback can leave inconsistency!
In this example, we see that checkpoints must somehow be coordinated with communication
If we allow programs to communicate and don’t coordinate checkpoints with message passing, system state becomes inconsistent even if individual processes are otherwise healthy
More problems with checkpoints
P crashes and rolls back
Will P “reissue” the same request? Recall our non-determinism assumption: it might not!
Solution?
One idea: if a process rolls back, roll others back to a consistent state
If a message was sent after the checkpoint, roll the receiver back to a state before that message was received
If a message was received after the checkpoint, roll the sender back to a state prior to sending it
Assumes channels will be “empty” after doing this
Solutions?
Q crashes and rolls back
Q has rolled back to a state before the request was received and before the reply was sent
Solution?
P must also roll back
Now it won’t upset us if P happens not to resend the same request
Implementation
Implementing independent checkpointing requires that dependencies are recorded so processes can jointly roll back to a consistent global state
Let CPi(m) be the m-th checkpoint taken by process Pi and let INTi(m) denote the interval between CPi(m-1) and CPi(m)
When Pi sends a message in interval INTi(m), Pi attaches to it the pair (i,m)
When Pj receives a message with attachment (i,m) in interval INTj(n), Pj records the dependency INTi(m) → INTj(n)
When Pj takes checkpoint CPj(n), it logs this dependency as well
When Pi rolls back to checkpoint CPi(m-1), we need to ensure that all processes that have received messages from Pi sent in interval INTi(m) are rolled back to a checkpoint preceding the receipt of such messages…
In particular, Pj will have to roll back to at least checkpoint CPj(n-1); further rolling back may be necessary…
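A minimal sketch of the dependency recording just described: senders attach (i, m) to each message, receivers record INTi(m) → INTj(n), and each checkpoint logs the dependencies seen during its interval so a rollback of Pi can be propagated; the process and transport details are illustrative assumptions.

```python
class Process:
    def __init__(self, pid):
        self.pid = pid
        self.interval = 1          # index of the current interval INT_self(m)
        self.deps = set()          # {(sender_id, sender_interval)} seen this interval
        self.checkpoint_deps = {}  # checkpoint number -> dependencies logged with it

    def send(self, dest, payload):
        # Attach the pair (i, m): who sent the message and during which interval.
        dest.receive(self.pid, self.interval, payload)

    def receive(self, sender_id, sender_interval, payload):
        # Record the dependency INT_sender(m) -> INT_self(n).
        self.deps.add((sender_id, sender_interval))

    def checkpoint(self):
        # Log the dependencies with CP_self(n), then start interval n+1.
        self.checkpoint_deps[self.interval] = set(self.deps)
        self.interval += 1
        self.deps = set()

    def must_roll_back_past(self, failed_id, failed_interval):
        """Checkpoints that depend on messages from the rolled-back interval."""
        return [n for n, deps in self.checkpoint_deps.items()
                if (failed_id, failed_interval) in deps]

p, q = Process("P"), Process("Q")
p.send(q, "request")                    # Q now depends on INT_P(1)
q.checkpoint()                          # CP_Q(1) logs that dependency
print(q.must_roll_back_past("P", 1))    # [1]: rolling P back to CP_P(0) drags CP_Q(1) with it
```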
Problems with checkpoints
But now we can get a cascade effect:
Q crashes and restarts from its checkpoint…
forcing P to roll back for consistency…
the new inconsistency forces Q to roll back ever further…
and then forces P to roll back ever further
This is a “cascaded” rollback
Or “domino effect”
It arises when the creation of checkpoints is uncoordinated w.r.t. communication
Can force a system to roll back to its initial state
Clearly undesirable in the extreme case…
Could be avoided in our example if we had a log for the channel from P to Q
Sometimes action is “external” to system, and we can’t roll back
Suppose that P is an ATM machine
P asks: Can I give Ken $100?
Q debits the account and says “OK”
P gives out the money
We can’t roll P back in this case since the money is already gone
Bigger issue is non-determinism
P’s actions could be tied to something random
For example, perhaps a timeout caused P to send this message
After rollback these non-deterministic events might occur in some other order
Results in a different behavior, like not sending that same request… yet Q saw it, acted on it, and even replied!
Issue has two sides
One involves reconstructing P’s message to Q in our examples
We don’t want P to roll back, since it might not send the same message
But if we had a log with P’s message in it, we would be fine: we could just replay it
The other is that Q might not send the same response (non-determinism)
If Q did send a response and doesn’t send the identical one again, we must roll P back
Options?
One idea is to coordinate the creation of checkpoints and logging of messages
In effect, find a point at which we can pause the system
All processes make a checkpoint in a coordinated way: the consistent snapshot (seen that, done that)
Then resume
Why isn’t this common?
Often we can’t control processes we didn’t code ourselves
Most systems have many black-box components
Can’t expect them to implement the checkpoint/rollback policy
Hence it isn’t really practical to do coordinated checkpointing if it includes system components
Why isn’t this common?
Further concern: not every process can make a checkpoint “on request”
Might be in the middle of a costly computation that left big data structures around
Or might adopt the policy that “I won’t do checkpoints while I’m waiting for responses from black-box components”
This interferes with coordination protocols
Implications?
Ensure that devices, timers, etc, can behave identically if we roll a process back and then restart it
Knowing that programs will re-do identical actions eliminates need to cascade rollbacks
Implications?
Must also cope with thread preemption
Occurs when we use lightweight threads, as in Java or C#
The thread scheduler might context switch at times determined by when an interrupt happens
Must force the same behavior again later, when restarting, or the program could behave differently
Determinism
Despite these issues, often see mechanisms that assume determinism
Basically they are saying
Either don’t use threads, timers, I/O from multiple incoming channels, shared memory, etc.
Or use a “determinism forcing mechanism”
With determinism…
We can revisit the checkpoint rollback problem and do much better
Eliminates the need for cascaded rollbacks
But we do need a way to replay the identical inputs that were received after the checkpoint was made
Forces us to think about keeping logs of the channels between processes
Two popular options
Receiver-based logging
Log received messages; like an “extension” of the checkpoint
Sender-based logging
Log messages when you send them; ensures you can resend them if needed
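A minimal sketch contrasting the two options; class and method names are illustrative.

```python
class ReceiverBasedLog:
    """Log messages on receipt; on recovery, replay them after the checkpoint."""
    def __init__(self):
        self.log = []
    def on_receive(self, msg):
        self.log.append(msg)       # kept on stable storage alongside the checkpoint
        return msg
    def replay_after_checkpoint(self):
        return list(self.log)      # feed these back to the restarted process

class SenderBasedLog:
    """Log messages on send; a recovering receiver asks its senders to resend."""
    def __init__(self):
        self.log = []
    def on_send(self, dest, msg):
        self.log.append((dest, msg))
        return msg
    def resend_to(self, dest):
        return [m for d, m in self.log if d == dest]
```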
Why do these work?
Recall the reasons for cascaded rollback
A cascade occurs if
Q received a message and replied to it, then rolls back to “before” that happened
With message logging, Q can regenerate the input and re-read the message
With these varied options
When Q rolls back we can
Re-run Q with identical inputs if
Q is deterministic, or
Nobody saw messages from Q after the checkpoint state was recorded, or
We roll back the receivers of those messages
An issue: deterministic programs often crash in the identical way if we force identical execution