Networks and Distributed Systems
CSE 490h
This presentation contains content licensed under the Creative Commons Attribution 2.5 License.
Outline
Networking
Remote Procedure Calls (RPC)
Transaction Processing Systems
Failure & Reliability
Fundamentals of Networking
Sockets: The Internet = tubes?
A socket is the basic network interface
Provides a two-way "pipe" abstraction between two applications
The client creates a socket and connects to the server, which receives a socket representing the other side
Ports
Within an IP address, a port is a sub-address identifying a listening program
Allows multiple clients to connect to a server at once
Example: Web Server (1/3)
1) Server creates a socket attached to port 80
The server creates a listener socket attached to a specific port. 80 is the agreed-upon port number for web traffic.
Example: Web Server (2/3)
The client-side socket is still connected to a port, but the OS chooses a random unused port number
When the client requests a URL (e.g., “www.google.com”), its OS uses a system called DNS to find its IP address.
2) Client creates a socket and connects to host
(Diagram: the client's anonymous socket connects to 66.102.7.99 : 80)
Example: Web Server (3/3)
Accepting the connection gives the server a new socket dedicated to this particular client
The listener is ready for more incoming connections, while we process the current connection in parallel
3) Server accepts connection, gets new socket for client
4) Data flows across connected socket as a “stream”, just like a file
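A minimal sketch of steps 1–4 in Java (the port number, the echo behavior, and the class names are illustrative choices, not part of the example above): the server attaches a listener socket to a port, accept() yields a new socket per client, and both sides read and write the connection as a stream.

    import java.io.*;
    import java.net.*;

    public class SocketSketch {
        // Server side: attach a listener socket to a port, accept one client, echo a line back.
        static void runServer(int port) throws IOException {
            try (ServerSocket listener = new ServerSocket(port)) {
                try (Socket client = listener.accept();   // new socket for this particular client
                     BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    out.println("echo: " + in.readLine()); // data flows as a stream, just like a file
                }
            }
        }

        // Client side: connect to the server; the OS picks a random unused local port for us.
        static void runClient(String host, int port) throws IOException {
            try (Socket socket = new Socket(host, port);
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
                out.println("hello");
                System.out.println(in.readLine());
            }
        }

        public static void main(String[] args) throws Exception {
            int port = 12345;                              // illustrative port, not the web's port 80
            Thread server = new Thread(() -> {
                try { runServer(port); } catch (IOException e) { e.printStackTrace(); }
            });
            server.start();
            Thread.sleep(200);                             // crude wait for the listener to be ready
            runClient("localhost", port);
            server.join();
        }
    }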
What makes this work?
Underneath the socket layer are several more protocols
Most important are TCP and IP (which are used hand-in-hand so often, they're often spoken of as one protocol: TCP/IP)
(Diagram: your data is wrapped in a TCP header, which is wrapped in turn in an IP header)
Even more low-level protocols handle how data is sent over Ethernet wires, or how bits are sent through the air using 802.11 wireless…
IP: The Internet Protocol
Defines the addressing scheme for computers
Encapsulates internal data in a “packet”
Does not provide reliability
Just includes enough information for the data to tell routers where to send it
TCP: Transmission Control Protocol
Built on top of IP
Introduces the concept of a "connection"
Provides reliability and ordering
Why is This Necessary?
Not actually tube-like "underneath the hood"
Unlike the phone system (circuit switched), the packet-switched Internet uses many routes at once
(Diagram: packets take many different routes between you and www.google.com)
Networking Issues
If a party to a socket disconnects, how much data did they receive?
… Did they crash? Or did a machine in the middle?
Can someone in the middle intercept/modify our data?
Traffic congestion makes switch/router topology important for efficient throughput
Remote Procedure Calls (RPC)
How RPC Doesn't Work
Regular client-server protocols involve sending data back and forth according to a shared state
Client:  GET /index.html HTTP/1.0
Server:  200 OK
         Length: 2400
         (file data)
Client:  GET /hello.gif HTTP/1.0
Server:  200 OK
         Length: 81494
…
Remote Procedure Call
RPC servers will call arbitrary functions in a dll or exe, with arguments passed over the network and return values sent back over the network
Client:  foo.dll, bar(4, 10, "hello")
Server:  "returned_string"
Client:  foo.dll, baz(42)
Server:  err: no such function
…
Possible Interfaces
RPC can be used with two basic interfaces: synchronous and asynchronous
Synchronous RPC is a “remote function call” – client blocks and waits for return val
Asynchronous RPC is a “remote thread spawn”
Synchronous RPC
Client:
  s = RPC(server_name, "foo.dll", get_hello, arg, arg, arg…)   (blocks until the call returns)
  print(s);
  ...
Server (RPC dispatcher dispatches into foo.dll):
  String get_hello(a, b, c) { … return "some hello str!"; }
Asynchronous RPC
Client:
  h = Spawn(server_name, "foo.dll", long_runner, x, y…)
  (more code keeps running…)
  GiantObject myObj = Sync(h);   (blocks only here, when the result is needed)
Server (RPC dispatcher dispatches into foo.dll):
  GiantObject long_runner(x, y) { … return new GiantObject(); }
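The Spawn/Sync pairing above maps naturally onto futures. A hedged local analogue in Java, where an ExecutorService stands in for the remote RPC dispatcher and GiantObject is just an illustrative placeholder type:

    import java.util.concurrent.*;

    public class AsyncRpcSketch {
        static class GiantObject { }                       // placeholder for a large result

        public static void main(String[] args) throws Exception {
            ExecutorService dispatcher = Executors.newSingleThreadExecutor();

            // "Spawn": submit the long-running call and get back a handle immediately
            Future<GiantObject> h = dispatcher.submit(() -> {
                Thread.sleep(1000);                        // pretend this work happens on the server
                return new GiantObject();
            });

            // ... more client code keeps running here ...

            // "Sync": block only when the result is actually needed
            GiantObject myObj = h.get();
            System.out.println("result ready: " + myObj);
            dispatcher.shutdown();
        }
    }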
Asynchronous RPC 2: Callbacks
Client:
  h = Spawn(server_name, "foo.dll", callback, long_runner, x, y…)
  (more code runs…)
  When the result arrives, a thread spawns to run the callback:
    void callback(o) { uses the Result }
Server (RPC dispatcher dispatches into foo.dll):
  Result long_runner(x, y) { … return new Result(); }
Wrapper Functions
Writing rpc_call(foo.dll, bar, arg0, arg1…) is poor form
  Confusing code
  Breaks abstraction
A wrapper function makes the code cleaner:
  bar(arg0, arg1);   // just write this; calls the "stub"
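A sketch of what such a stub could look like in Java; rpcCall is a hypothetical low-level helper (it does not correspond to any real RPC library), and bar's signature is assumed for illustration:

    public class FooStub {
        // Hypothetical helper that marshals arguments, sends them to the server,
        // and unmarshals the reply.
        static Object rpcCall(String module, String function, Object... args) {
            throw new UnsupportedOperationException("illustrative only");
        }

        // The wrapper ("stub"): callers just write FooStub.bar(4, 10) and never
        // see the RPC machinery.
        public static String bar(int arg0, int arg1) {
            return (String) rpcCall("foo.dll", "bar", arg0, arg1);
        }
    }

In practice such stubs are usually generated from an interface description rather than written by hand.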
More Design Considerations
Who can call RPC functions? Anybody?
How do you handle multiple versions of a function?
Need to marshal objects
How do you handle error conditions?
Numerous protocols: DCOM, CORBA, Java RMI…
(break)
Transaction Processing Systems
(We’re using the blue cover sheets on the TPS reports now…)
TPS: Definition
A system that handles transactions coming from several sources concurrently
Transactions are “events that generate and modify data stored in an information system for later retrieval”*
* http://en.wikipedia.org/wiki/Transaction_Processing_System
Reliability Demands
Support partial failure
The total system must support graceful decline in application performance rather than a full halt
Reliability Demands
Data Recoverability
If components fail, their workload must be picked up by still-functioning units
Reliability Demands
Individual Recoverability
Nodes that fail and restart must be able to rejoin the group activity without a full group restart
Reliability Demands
Consistency
Concurrent operations or partial internal failures should not cause externally visible nondeterminism
Reliability Demands
Scalability
Adding increased load to a system should not cause outright failure, but a graceful decline
Increasing resources should support a proportional increase in load capacity
Reliability Demands
Security
The entire system should be impervious to unauthorized access
This requires considering many more attack vectors than on single-machine systems
Ken Arnold, CORBA designer:
“Failure is the defining difference between distributed and local programming”
Component Failure
Individual nodes simply stop
Data Failure
Packets omitted by an overtaxed router
Or dropped by a full receive buffer in the kernel
Corrupt data retrieved from disk or network
Network Failure
External & internal links can die
Some can be routed around in a ring or mesh topology
A star topology may cause individual nodes to appear to halt
A tree topology may cause a "split"
Messages may be sent multiple times, not at all, or in corrupted form…
Timing Failure
Temporal properties may be violated
Lack of a "heartbeat" message may be interpreted as a component halt
Clock skew between nodes may confuse version-aware data readers
Byzantine Failure
Difficult-to-reason-about circumstances arise
Commands sent to a foreign node are not confirmed: what can we conclude about the state of the system?
Malicious Failure
Malicious (or maybe naïve) operator injects invalid or harmful commands into system
Preparing for Failure
Distributed systems must be robust to these failure conditions
But there are lots of pitfalls…
The Eight Design Fallacies
The network is reliable.
Latency is zero.
Bandwidth is infinite.
The network is secure.
Topology doesn't change.
There is one administrator.
Transport cost is zero.
The network is homogeneous.
-- Peter Deutsch and James Gosling, Sun Microsystems
Dealing With Component Failure
Use heartbeats to monitor component availability
“Buddy” or “Parent” node is aware of desired computation and can restart it elsewhere if needed
Individual storage nodes should not be the sole owner of data
Pitfall: how do you keep replicas consistent?
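A minimal sketch of the heartbeat idea above in Java; the 5-second timeout and the restartElsewhere hook are assumptions made for illustration:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class HeartbeatMonitor {
        private static final long TIMEOUT_MS = 5_000;                  // assumed timeout, not from the slides
        private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();

        // Called whenever a heartbeat message arrives from a node.
        public void onHeartbeat(String nodeId) {
            lastSeen.put(nodeId, System.currentTimeMillis());
        }

        // Periodically run by the "buddy"/"parent" node to detect silent components.
        public void checkNodes() {
            long now = System.currentTimeMillis();
            for (Map.Entry<String, Long> e : lastSeen.entrySet()) {
                if (now - e.getValue() > TIMEOUT_MS) {
                    restartElsewhere(e.getKey());                      // hypothetical recovery hook
                }
            }
        }

        private void restartElsewhere(String nodeId) {
            System.out.println("node " + nodeId + " presumed dead; restarting its computation elsewhere");
        }
    }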
Dealing With Data Failure
Data should be check-summed and verified at several points
Never trust another machine to do your data validation!
Sequence identifiers can be used to ensure commands and packets are not lost
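For instance, each message could carry a sequence identifier plus a CRC32 checksum so the receiver can verify it independently. A sketch in Java, with the framing invented for illustration:

    import java.util.zip.CRC32;

    public class CheckedMessage {
        final long sequence;     // detects lost or reordered messages
        final byte[] payload;
        final long checksum;     // detects corruption in transit or on disk

        CheckedMessage(long sequence, byte[] payload) {
            this.sequence = sequence;
            this.payload = payload.clone();
            this.checksum = crc(payload);
        }

        static long crc(byte[] data) {
            CRC32 c = new CRC32();
            c.update(data);
            return c.getValue();
        }

        // Receiver side: never trust the sender; verify before using the data.
        boolean verify(long expectedSequence) {
            return sequence == expectedSequence && crc(payload) == checksum;
        }
    }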
Dealing With Network Failure
Have a well-defined split policy
Networks should routinely self-discover topology
Well-defined arbitration/leader-election protocols determine authoritative components
Inactive components should gracefully clean up and wait for network rejoin
Dealing With Other Failures
Individual application-specific problems can be difficult to envision
Make as few assumptions about foreign machines as possible
Design for security at each step
Key Features of TPS: ACID
“ACID” is the acronym for the features a TPS must support:
Atomicity – A set of changes must all succeed or all fail
Consistency – Changes to data must leave the data in a valid state when the full change set is applied
Isolation – The effects of a transaction must not be visible until the entire transaction is complete
Durability – After a transaction has been committed successfully, the state change must be permanent
Atomicity & Durability
What happens if we write half of a transaction to disk and the power goes out?
Logging: The Undo Buffer
1. The database writes to the log the current values of all cells it is going to overwrite
2. The database overwrites the cells with the new values
3. The database marks the log entry as committed
If the database crashes during (2), we use the log to roll back the tables to their prior state
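A toy sketch of steps 1–3 in Java for an in-memory key/value "table" (real databases log to disk and force the log out before the data; this only illustrates the record-old-values-first ordering, and all names are made up):

    import java.util.HashMap;
    import java.util.Map;

    public class UndoLogSketch {
        private final Map<String, String> table = new HashMap<>();     // the "database table"
        private final Map<String, String> undoLog = new HashMap<>();   // old values, written first
        private boolean committed = false;

        // Step 1: log the current value of every cell we are about to overwrite.
        // Step 2: only then overwrite the cell with its new value.
        public void write(String key, String newValue) {
            if (!undoLog.containsKey(key)) {
                undoLog.put(key, table.get(key));   // may record null if the cell didn't exist
            }
            table.put(key, newValue);
        }

        // Step 3: mark the transaction committed; the undo records are no longer needed.
        public void commit() {
            committed = true;
            undoLog.clear();
        }

        // Recovery after a crash during step 2: roll the table back to its prior state.
        public void recover() {
            if (!committed) {
                for (Map.Entry<String, String> e : undoLog.entrySet()) {
                    if (e.getValue() == null) table.remove(e.getKey());
                    else table.put(e.getKey(), e.getValue());
                }
                undoLog.clear();
            }
        }
    }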
Consistency: Data Types
Data entered in databases have rigorous data types associated with them, and explicit ranges
Does not protect against all errors (entering a date in the past is still a valid date, etc), but eliminates tedious programmer concerns
Consistency: Foreign Keys
Example schema: one table with columns Purchase_id, Purchaser_name, Item_purchased (FOREIGN key), where Item_purchased refers into a second table with columns Item_id, Item_name, Cost
Database designers declare that fields are indices into the keys of another table
Database ensures that target key exists before allowing value in source field
Isolation
Using mutual-exclusion locks, we can prevent other processes from reading data we are in the process of writing
When a database is prepared to commit a set of changes, it locks any records it is going to update before making the changes
Faulty Locking
Writer transaction:                    Other process:
  Lock (A)
  Write to table A
  Unlock (A)
                                       Lock (A)
                                       Read from A
                                       Unlock (A)
  Lock (B)
  Write to table B
  Unlock (B)
(time flows downward)
Locking alone does not ensure isolation!
Changes to table A are visible before changes to table B – this is not an isolated transaction
Two-Phase Locking
After a transaction has released any locks, it may not acquire any new locks
Effect: The lock set owned by a transaction has a “growing” phase and a “shrinking” phase
Writer transaction:                    Other process:
  Lock (A)
  Write to table A
  Lock (B)
  Write to table B
  Unlock (A)
  Unlock (B)
                                       Lock (A)
                                       Read from A
                                       Unlock (A)
(time flows downward)
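A hedged sketch of the rule in Java using ReentrantLock (the lock names and write methods are illustrative): every lock is acquired during the growing phase, both writes happen, and only then are the locks released, so another transaction cannot observe A's new value before B's.

    import java.util.concurrent.locks.ReentrantLock;

    public class TwoPhaseLockingSketch {
        static final ReentrantLock lockA = new ReentrantLock();
        static final ReentrantLock lockB = new ReentrantLock();

        static void transaction() {
            // Growing phase: acquire every lock the transaction will need
            lockA.lock();
            lockB.lock();
            try {
                writeTableA();          // hypothetical update to table A
                writeTableB();          // hypothetical update to table B
            } finally {
                // Shrinking phase: release locks only after all changes are made;
                // after this point the transaction may not acquire new locks
                lockB.unlock();
                lockA.unlock();
            }
        }

        static void writeTableA() { /* ... */ }
        static void writeTableB() { /* ... */ }
    }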
Relationship to Distributed Computing
At the heart of a TPS is usually a large database server
Several distributed clients may connect to this server at points in time
Database may be spread across multiple servers, but must still maintain ACID
Conclusions
We’ve seen 3 layers that make up a distributed system
Designing a large distributed system involves engineering tradeoffs at each of these levels
Appreciating subtle concerns at each level requires diving past the abstractions, but abstractions are still useful in general