Page 1: Distributed Filesystems: NFS and GFS

Distributed Filesystems: NFS and GFS

Costin Raiciu

Advanced Topics in Distributed Systems
18th October, 2011

Slides courtesy of Brad Karp (UCL) and Robert Morris (MIT)

Page 2: Distributed Filesystems: NFS and GFS

Distributed Filesystems

• Why might we need them?
  – Sharing data: many users read/write the same files
  – Files too big to store on a single machine
  – Better fault tolerance – via replication across machines
  – Better performance – if reads are serviced by many replicas

Page 3: Distributed Filesystems: NFS and GFS

What defines a Distributed FS?

• Need some common namespace (dir/open/close)
  – Metadata
    • which files are in this directory?
    • where are the contents of this file stored?
  – This is typically small in size
  – We care a lot about consistency
• Need a way to access data (read/write)
  – There will be lots of data

Page 4: Distributed Filesystems: NFS and GFS

How do we design a distributed filesystem?

• Should we use a centralized server?
  – Easier to manage
  – All clients connect to it
  – No fault tolerance (but can be added)
• Or should we use many servers?
• Who holds metadata?
• Who holds data?

Page 5: Distributed Filesystems: NFS and GFS

Are we allowed to change apps?

• It influences the design quite a bit!
• Apps unchanged
  – Great deployability
  – Bound to Unix local filesystem semantics
• Apps changed
  – We can do whatever we want
  – But optimal design depends on app!

Page 6: Distributed Filesystems: NFS and GFS

Network File System

Page 7: Distributed Filesystems: NFS and GFS

7

NFS Is Relevant

• Original paper from 1985
• Very successful, still widely used today
• Early result; much subsequent research in networked filesystems “fixing shortcomings of NFS”

Page 8: Distributed Filesystems: NFS and GFS

8

Why Did They Build NFS?

• Sharing data: many users reading/writing same files but running on separate machines
• Manageability: ease of backing up one server
• Disks may be expensive (true when NFS built; no longer true)
  – Diskless workstations

Page 9: Distributed Filesystems: NFS and GFS

9

Goals for NFS

• Easily deployed
  – Easy to add to existing UNIX systems
• Work with existing, unmodified apps:
  – Same semantics as local UNIX filesystem
• Compatible with non-UNIX OSes
  – Wire protocol cannot be too UNIX-specific
• Efficient “enough”
  – Needn’t offer same performance as local UNIX filesystem

Page 10: Distributed Filesystems: NFS and GFS

10

NFS Architecture

[Architecture diagram: apps (App1, App2) on client machines make syscalls into the kernel filesystem layer, which issues RPCs across the LAN to the NFS server (with disk).]

Page 11: Distributed Filesystems: NFS and GFS

11

Simple Example: Reading a File

• What RPCs would we expect for:
    fd = open("f", 0);
    read(fd, buf, 8192);
    close(fd);

Page 12: Distributed Filesystems: NFS and GFS

12

Simple Example: NFS RPCs for Reading a File

• Where are RPCs for close()?
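
The slide’s RPC timeline figure is not reproduced here. As a rough sketch (assuming NFSv2-style RPC names), the syscalls from the previous slide map to RPCs as annotated below; close() needs no RPC because the server keeps no per-open state:

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[8192];

        int fd = open("f", O_RDONLY);          /* LOOKUP(dirfh, "f") -> file handle + attributes */
        ssize_t n = read(fd, buf, sizeof buf); /* READ(fh, offset=0, count=8192) -> data */
        close(fd);                             /* no RPC: server holds no open-file state */
        (void)n;
        return 0;
    }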

Page 13: Distributed Filesystems: NFS and GFS

NFS Server Is Stateless

• Simplifies crash recovery
• Allows it to scale to many concurrent clients
  – No need to remember them!
• NFS runs mainly on top of UDP

Page 14: Distributed Filesystems: NFS and GFS

14

File Handle: Function and Contents

• 32-byte name, opaque to client
• Identifies object on remote server
• Must be included in all NFS RPCs
• File handle contains:
  – filesystem ID
  – i-number (essentially, physical block ID on disk)
  – generation number
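
As an illustrative sketch of the fields just listed (field widths and names are assumptions; the real handle layout is server-private and opaque to clients):

    #include <stdint.h>

    struct nfs_fhandle {             /* 32 bytes total, opaque to the client */
        uint32_t fsid;               /* which exported filesystem on the server */
        uint32_t inumber;            /* i-node number within that filesystem */
        uint32_t generation;         /* bumped when the server frees/re-uses the i-node */
        uint8_t  server_private[20]; /* remaining bytes: server-private data / padding */
    };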

Page 15: Distributed Filesystems: NFS and GFS

15

Generation Number: Motivation

• Client 1 opens file
• Client 2 opens same file
• Client 1 deletes the file, creates new one
• UNIX local filesystem semantics:
  – Client 2 (App 2) sees old file
• In NFS, suppose server re-uses i-node
  – Same i-number for new file as old
  – RPCs from client 2 carry that i-number, which now names the new file
  – Client 2 sees new file!

Page 16: Distributed Filesystems: NFS and GFS

16

Generation Number: Solution

• Each time server frees i-node, increments its generation number
  – Client 2’s RPCs now use old file handle
  – Server can distinguish requests for old vs. new file
• Semantics still not same as local UNIX fs!
  – Apps 1 and 2 sharing local fs: client 2 will see old file
  – Clients 1 and 2 on different workstations sharing NFS fs: client 2 gets error “stale file handle”

Trade precise UNIX fs semantics for simplicity
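
A hypothetical server-side check (not shown on the slides) illustrating how the generation number turns i-node re-use into a "stale file handle" error instead of silent access to the new file:

    #include <errno.h>
    #include <stdint.h>

    struct fhandle { uint32_t inumber; uint32_t generation; };
    struct inode   { uint32_t generation; };   /* current on-disk generation of this i-number */

    int validate_handle(const struct fhandle *fh, const struct inode *ino)
    {
        /* i-number matches, but the i-node was freed and re-used since the
           handle was issued: reject rather than expose the new file */
        if (fh->generation != ino->generation)
            return -ESTALE;                    /* client sees "stale file handle" */
        return 0;                              /* handle still names the original file */
    }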

Page 17: Distributed Filesystems: NFS and GFS

17

Why i-numbers, not Filenames?

• Local UNIX fs: client 1 reads dir2/f
• NFS with pathnames: client 1 reads dir1/f
• Concurrent access by clients can change object referred to by filename
  – Why not a problem in local UNIX fs?

• i-number refers to actual object, not filename

Page 18: Distributed Filesystems: NFS and GFS

18

Where Does Client Learn File Handles?

• Before READ, client obtains file handle using LOOKUP or CREATE

• Client stores returned file handle in vnode
• Client’s file descriptor refers to vnode
• Where does client get very first file handle?

Page 19: Distributed Filesystems: NFS and GFS

19

NFS Implementation Layering

• Why not just send syscalls over wire?
• UNIX semantics defined in terms of files, not just filenames: file’s identity is i-number on disk
• Even after rename, all these refer to same object as before:
  – File descriptor
  – Home directory
  – Cache contents

vnode’s purpose: remember file handles!

Page 20: Distributed Filesystems: NFS and GFS

20

Example: Creating a File over NFS

• Suppose client does:
    fd = creat("d/f", 0666);
    write(fd, "foo", 3);
    close(fd);
• RPCs sent by client:
  – newfh = LOOKUP(fh, "d")
  – filefh = CREATE(newfh, "f", 0666)
  – WRITE(filefh, 0, 3, "foo")
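
The same example as a compilable sketch, with the slide’s RPCs attached to the calls that trigger them (fh, the handle of the working directory, is assumed to be cached from an earlier LOOKUP or from the mount):

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = creat("d/f", 0666);   /* newfh  = LOOKUP(fh, "d")
                                          filefh = CREATE(newfh, "f", 0666) */
        write(fd, "foo", 3);           /* WRITE(filefh, offset=0, count=3, "foo") */
        close(fd);                     /* no RPC of its own required by NFSv2 */
        return 0;
    }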

Page 21: Distributed Filesystems: NFS and GFS

21

Server Crashes and Robustness

• Suppose server crashes and reboots
• Will client requests still work?
  – Will client’s file handles still make sense?
  – Yes! File handle is disk address of i-node
• What if server crashes just after client sends an RPC?
  – Before server replies: client doesn’t get reply, retries
• What if server crashes just after replying to WRITE RPC?

Page 22: Distributed Filesystems: NFS and GFS

22

WRITE RPCs and Crash Robustness

• What must server do to ensure correct behavior when it crashes after a WRITE from client?
• Client’s data safe on disk
• i-node with new block number and new length safe on disk
• Indirect block safe on disk
• Three writes, three seeks (roughly 15 ms each): 45 ms per WRITE
• 22 WRITEs/s; at 8 KB per WRITE, that is about 180 KB/s

Page 23: Distributed Filesystems: NFS and GFS

23

WRITEs and Throughput

• Design for higher write throughput:
  – Client writes entire file sequentially at Ethernet speed (few MB/s)
  – Update inode, &c. afterwards
• Why doesn’t NFS use this approach?
  – What happens if server crashes and reboots?
  – Does client believe write completed?
• Improved in NFSv3: WRITEs async, COMMIT on close()

Page 24: Distributed Filesystems: NFS and GFS

24

Client Caches in NFS

• Server caches disk blocks
• Client caches file content blocks, some clean, some dirty
• Client caches file attributes
• Client caches name-to-file-handle mappings
• Client caches directory contents
• General concern: what if client A caches data, but client B changes it?

Page 25: Distributed Filesystems: NFS and GFS

25

Multi-Client Consistency

• Real-world examples of data cached on one host, changed on another:
  – Save in emacs on one host, “make” on other host
  – “make” on one host, run program on other host

• (No problem if users all run on one workstation, or don’t share files)

Page 26: Distributed Filesystems: NFS and GFS

26

Consistency Protocol: First Try

• On every read(), client asks server whether file has changed
  – if not, use cached data for file
  – if so, issue READ RPCs to get fresh data from server
• Is this protocol sufficient to make each read() see latest write()?
• What’s the effect on performance?
• Do we need such strong consistency?

Page 27: Distributed Filesystems: NFS and GFS

27

Compromise:Close-to-Open Consistency

• Implemented by most NFS clients
• Contract:
  – if client A write()s a file, then close()s it,
  – then client B open()s the file, and read()s it,
  – client B’s reads will reflect client A’s writes

• Benefit: clients need only contact server during open() and close()—not on every read() and write()

Page 28: Distributed Filesystems: NFS and GFS

28

Compromise:Close-to-Open Consistency

Fixes “emacs save, then make” example…
…so long as user waits until emacs says it’s done saving file!

Page 29: Distributed Filesystems: NFS and GFS

29

Close-to-Open Implementation

• FreeBSD UNIX client (not part of protocol spec):
  – Client keeps file mtime and size for each cached file block
  – close() starts WRITEs for all file’s dirty blocks
  – close() waits for all of server’s replies to those WRITEs
  – open() always sends GETATTR to check file’s mtime and size, caches file attributes
  – read() uses cached blocks only if mtime/length have not changed
  – Client checks cached directory contents with GETATTR and ctime
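
A minimal in-memory sketch of those rules; the structures and RPC stubs below are invented for illustration and are not FreeBSD’s actual code:

    #include <stdbool.h>
    #include <stdint.h>

    struct attrs       { uint64_t mtime; uint64_t size; };
    struct cached_file { struct attrs attrs; bool has_dirty_blocks; };

    /* stand-ins for the real GETATTR / WRITE RPCs */
    struct attrs getattr_rpc(void) { struct attrs a = {0, 0}; return a; }
    void flush_dirty_blocks_and_wait(void) { /* send WRITEs, wait for all replies */ }

    void nfs_open(struct cached_file *f)
    {
        struct attrs server = getattr_rpc();   /* open() always asks the server */
        if (server.mtime != f->attrs.mtime || server.size != f->attrs.size) {
            /* file changed since we cached it: drop cached blocks, keep new attrs */
            f->attrs = server;
        }
    }

    bool read_may_use_cache(const struct cached_file *f, struct attrs now)
    {
        /* read() serves from the cache only while mtime/size are unchanged */
        return now.mtime == f->attrs.mtime && now.size == f->attrs.size;
    }

    void nfs_close(struct cached_file *f)
    {
        if (f->has_dirty_blocks) {
            flush_dirty_blocks_and_wait();     /* WRITEs for all dirty blocks... */
            f->has_dirty_blocks = false;       /* ...done only after the server replies */
        }
    }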

Page 30: Distributed Filesystems: NFS and GFS

30

Name Caching in Practice

• Name-to-file-handle cache not always checked for consistency on each LOOKUP
  – If file deleted, may get “stale file handle” error from server
  – If file renamed and new file created with same name, may even get wrong file’s contents

Page 31: Distributed Filesystems: NFS and GFS

31

NFS: Secure?

• What prevents unauthorized users from issuing RPCs to an NFS server?
  – e.g., remove files, overwrite data, &c.
• What prevents unauthorized users from forging NFS replies to an NFS client?
  – e.g., return data other than what is on the real server

IP-address-based authentication of mount requests is weak at best; there is no authentication of the server to the client.
Security was not a first-order goal in original NFS.

Page 32: Distributed Filesystems: NFS and GFS

32

Limitations of NFS

• Security: what if untrusted users can be root on client machines?

• Scalability: how many clients can share one server?
  – Writes always go through to server
  – Some writes are to “private,” unshared files that are deleted soon after creation
• Can you run NFS on a large, complex network?
  – Effects of latency? Packet loss? Bottlenecks?

Despite its limitations, NFS was a huge success:
Simple enough to build for many OSes
Correct enough and performs well enough to be practically useful in deployment

Page 33: Distributed Filesystems: NFS and GFS

GFS: The Google File System

Page 34: Distributed Filesystems: NFS and GFS

34

Motivating Application: Google

• Crawl the whole web
• Store it all on “one big disk”
• Process users’ searches on “one big CPU”
• More storage, CPU required than one PC can offer
• Custom parallel supercomputer: expensive (so much so, not really available today)

Page 35: Distributed Filesystems: NFS and GFS

35

Cluster of PCs as Supercomputer

• Lots of cheap PCs, each with disk and CPU
  – High aggregate storage capacity
  – Spread search processing across many CPUs
• How to share data among PCs?
• NFS: share fs from one server, many clients
  – Goal: mimic original UNIX local fs semantics
  – Compromise: close-to-open consistency (performance)
  – Fault tolerance?
• Ivy: shared virtual memory (we will discuss later)
  – Fine-grained, relatively strong consistency at load/store level
  – Fault tolerance?

GFS: File system for sharing data on clusters, designed with Google’s application workload specifically in mind

Page 36: Distributed Filesystems: NFS and GFS

36

Google Platform Characteristics

• 100s to 1000s of PCs in cluster
• Cheap, commodity parts in PCs
• Many modes of failure for each PC:
  – App bugs, OS bugs
  – Human error
  – Disk failure, memory failure, net failure, power supply failure
  – Connector failure

• Monitoring, fault tolerance, auto-recovery essential

Page 37: Distributed Filesystems: NFS and GFS

37

Google File System: Design Criteria

• Detect, tolerate, recover from failures automatically
• Large files, >= 100 MB in size
• Large, streaming reads (>= 1 MB in size)
  – Read once
• Large, sequential writes that append
  – Write once
• Concurrent appends by multiple clients (e.g., producer-consumer queues)
  – Want atomicity for appends without synchronization overhead among clients

Page 38: Distributed Filesystems: NFS and GFS

38

GFS: Architecture

• One master server (state replicated on backups)

• Many chunk servers (100s–1000s)
  – Spread across racks; intra-rack b/w greater than inter-rack
  – Chunk: 64 MB portion of file, identified by 64-bit, globally unique ID
• Many clients accessing same and different files stored on same cluster

Page 39: Distributed Filesystems: NFS and GFS

39

GFS: Architecture (2)

Page 40: Distributed Filesystems: NFS and GFS

40

Master Server

• Holds all metadata:
  – Namespace (directory hierarchy)
  – Access control information (per-file)
  – Mapping from files to chunks
  – Current locations of chunks (chunkservers)
• Manages chunk leases to chunkservers
• Garbage collects orphaned chunks
• Migrates chunks between chunkservers

Holds all metadata in RAM; very fast operations on file system metadata
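
A rough sketch of those in-RAM tables; the names, sizes, and types are illustrative assumptions, not GFS’s actual data structures:

    #include <stdint.h>

    #define MAX_REPLICAS 3                    /* default replication factor */

    struct chunk_info {
        uint64_t handle;                      /* globally unique 64-bit chunk ID */
        uint64_t version;                     /* bumped each time a new lease is granted */
        uint32_t replicas[MAX_REPLICAS];      /* chunkservers currently holding this chunk */
        uint32_t lease_holder;                /* primary chunkserver while a lease is outstanding */
    };

    struct file_info {
        char               path[256];         /* entry in the directory hierarchy */
        uint32_t           acl;               /* per-file access control bits */
        struct chunk_info *chunks;            /* ordered file -> chunk mapping */
        uint32_t           nchunks;
    };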

Page 41: Distributed Filesystems: NFS and GFS

41

Chunkserver

• Stores 64 MB file chunks on local disk using standard Linux filesystem, each with version number and checksum

• Read/write requests specify chunk handle and byte range

• Chunks replicated on configurable number of chunkservers (default: 3)

• No caching of file data (beyond standard Linux buffer cache)

Page 42: Distributed Filesystems: NFS and GFS

42

Client

• Issues control (metadata) requests to master server

• Issues data requests directly to chunkservers
• Caches metadata
• Does no caching of data
  – No consistency difficulties among clients
  – Streaming reads (read once) and append writes (write once) don’t benefit much from caching at client

Page 43: Distributed Filesystems: NFS and GFS

43

Client API

• Is GFS a filesystem in traditional sense?
  – Implemented in kernel, under vnode layer?
  – Mimics UNIX semantics?
• No; a library apps can link in for storage access
• API:
  – open, delete, read, write (as expected)
  – snapshot: quickly create copy of file
  – append: at least once, possibly with gaps and/or inconsistencies among clients

Page 44: Distributed Filesystems: NFS and GFS

44

Client Read

• Client sends master:– read(file name, chunk index)

• Master’s reply:– chunk ID, chunk version number, locations of replicas

• Client sends “closest” chunkserver w/replica:
  – read(chunk ID, byte range)
  – “Closest” determined by IP address on simple rack-based network topology
• Chunkserver replies with data
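
Putting those steps together, a sketch of the client-side read path; the helper functions and types are hypothetical stand-ins, not the real GFS client library:

    #include <stddef.h>
    #include <stdint.h>

    #define CHUNK_SIZE (64ULL << 20)          /* 64 MB chunks */

    struct chunk_loc { uint64_t chunk_id; uint64_t version; int replicas[3]; };

    /* hypothetical RPC stubs */
    struct chunk_loc ask_master(const char *path, uint64_t chunk_index)
    { struct chunk_loc l = {0, 0, {0, 0, 0}}; (void)path; (void)chunk_index; return l; }
    int closest_replica(const struct chunk_loc *l) { return l->replicas[0]; }
    size_t read_from_chunkserver(int srv, uint64_t id, uint64_t off, void *buf, size_t len)
    { (void)srv; (void)id; (void)off; (void)buf; return len; }

    size_t gfs_read(const char *path, uint64_t file_offset, void *buf, size_t len)
    {
        /* 1. map the byte offset to a chunk index; ask the master (control path only) */
        struct chunk_loc loc = ask_master(path, file_offset / CHUNK_SIZE);

        /* 2. fetch the byte range directly from the "closest" replica (data path) */
        int srv = closest_replica(&loc);
        return read_from_chunkserver(srv, loc.chunk_id, file_offset % CHUNK_SIZE, buf, len);
    }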

Page 45: Distributed Filesystems: NFS and GFS

45

Client Write

• Some chunkserver is primary for each chunk
  – Master grants lease to primary (typically for 60 sec.)
  – Leases renewed using periodic heartbeat messages between master and chunkservers
• Client asks master for primary and secondary replicas for each chunk
• Client sends data to replicas in daisy chain
  – Pipelined: each replica forwards as it receives
  – Takes advantage of full-duplex Ethernet links

Page 46: Distributed Filesystems: NFS and GFS

46

Client Write (2)

• All replicas acknowledge receipt of the data to the client
• Client sends write request to primary
• Primary assigns serial number to write request, providing ordering
• Primary forwards write request with same serial number to secondaries
• Secondaries all reply to primary after completing write
• Primary replies to client
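
A sketch of the client side of this two-phase flow, data push first, then the ordered write request; all helper names are hypothetical:

    #include <stdbool.h>
    #include <stddef.h>

    struct replica_set { int primary; int secondaries[2]; };

    /* stubs for the steps described on the last two slides */
    struct replica_set ask_master_for_replicas(const char *path, int chunk_index)
    { struct replica_set r = {0, {1, 2}}; (void)path; (void)chunk_index; return r; }
    void push_data_daisy_chain(struct replica_set r, const void *data, size_t len)
    { (void)r; (void)data; (void)len; /* client -> nearest replica -> next replica ... */ }
    bool all_replicas_acked_data(struct replica_set r) { (void)r; return true; }
    bool send_write_request_to_primary(struct replica_set r) { (void)r; return true; }

    bool gfs_write_chunk(const char *path, int chunk_index, const void *data, size_t len)
    {
        struct replica_set r = ask_master_for_replicas(path, chunk_index);

        push_data_daisy_chain(r, data, len);     /* phase 1: push data along the chain */
        if (!all_replicas_acked_data(r))
            return false;

        /* phase 2: the primary assigns a serial number, applies the write, forwards
           it to the secondaries in that order, and replies once they have all acked */
        return send_write_request_to_primary(r);
    }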

Page 47: Distributed Filesystems: NFS and GFS

47

Client Write (3)

Page 48: Distributed Filesystems: NFS and GFS

48

Client Record Append

• Google uses large files as queues between multiple producers and consumers
• Same control flow as for writes, except…
• Client pushes data to replicas of last chunk of file
• Client sends request to primary
• Common case: request fits in current last chunk:
  – Primary appends data to own replica
  – Primary tells secondaries to do same at same byte offset in theirs
  – Primary replies with success to client

Page 49: Distributed Filesystems: NFS and GFS

49

Client Record Append (2)

• When data won’t fit in last chunk:
  – Primary fills current chunk with padding
  – Primary instructs other replicas to do same
  – Primary replies to client, “retry on next chunk”
• If record append fails at any replica, client retries operation
  – So replicas of same chunk may contain different data—even duplicates of all or part of record data
• What guarantee does GFS provide on success?
  – Data written at least once as an atomic unit
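
A sketch of the primary’s decision for a record append, following the two cases above (64 MB chunks assumed; names are illustrative):

    #include <stdint.h>

    #define CHUNK_SIZE (64ULL << 20)   /* 64 MB */

    /* Returns the offset chosen for the record, or -1 meaning "pad this chunk
       and have the client retry on the next chunk". */
    int64_t primary_record_append(uint64_t bytes_used_in_chunk, uint64_t record_len)
    {
        if (bytes_used_in_chunk + record_len > CHUNK_SIZE) {
            /* fill the rest of the chunk with padding, tell the secondaries to
               do the same, and reply "retry on next chunk" */
            return -1;
        }
        /* common case: append here, and tell the secondaries to write the same
           record at this same offset */
        return (int64_t)bytes_used_in_chunk;
    }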

Page 50: Distributed Filesystems: NFS and GFS

50

GFS: Consistency Model

• Changes to namespace (i.e., metadata) are atomic
  – Done by single master server!
  – Master uses log to define global total order of namespace-changing operations
• Data changes more complicated
• Consistent: file region all clients see as same, regardless of replicas they read from
• Defined: after data mutation, file region that is consistent, and all clients see that entire mutation

Page 51: Distributed Filesystems: NFS and GFS

51

GFS: Data Mutation Consistency

• Record append completes at least once, at offset of GFS’ choosing

• Apps must cope with Record Append semantics

                       Write                      Record Append
Serial success         defined                    defined, interspersed with inconsistent
Concurrent successes   consistent but undefined   defined, interspersed with inconsistent
Failure                inconsistent               inconsistent

Page 52: Distributed Filesystems: NFS and GFS

52

Applications and Record Append Semantics

• Applications should include checksums in records they write using Record Append
  – Reader can identify padding / record fragments using checksums
• If application cannot tolerate duplicated records, should include unique ID in record
  – Reader can use unique IDs to filter duplicates
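
One possible way for an application to frame its records accordingly; the layout is an assumption for illustration, since GFS does not mandate any record format:

    #include <stdint.h>

    /* Self-describing record an application might write with Record Append. */
    struct record_header {
        uint32_t magic;      /* fixed marker: lets a reader skip padding and fragments */
        uint32_t length;     /* payload bytes that follow this header */
        uint64_t record_id;  /* unique per logical record, used to filter duplicates */
        uint32_t checksum;   /* checksum of the payload (e.g., a CRC32) */
    };

    /* A reader accepts a record only if magic and checksum match, and processes
       each record_id at most once, discarding duplicates left by client retries. */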

Page 53: Distributed Filesystems: NFS and GFS

53

Logging at Master

• Master has all metadata information
  – Lose it, and you’ve lost the filesystem!

• Master logs all client requests to disk sequentially

• Replicates log entries to remote backup servers

• Only replies to client after log entries safe on disk on self and backups!

Page 54: Distributed Filesystems: NFS and GFS

54

Chunk Leases and Version Numbers

• If no outstanding lease when client requests write, master grants new one

• Chunks have version numbers
  – Stored on disk at master and chunkservers
  – Each time master grants new lease, increments version, informs all replicas
• Master can revoke leases
  – e.g., when client requests rename or snapshot of file

Page 55: Distributed Filesystems: NFS and GFS

55

What If the Master Reboots?

• Replays log from disk
  – Recovers namespace (directory) information
  – Recovers file-to-chunk-ID mapping
• Asks chunkservers which chunks they hold
  – Recovers chunk-ID-to-chunkserver mapping
• If a chunkserver has an older chunk version, that replica is stale
  – Chunkserver was down at lease renewal
• If a chunkserver has a newer chunk version, adopt its version number
  – Master may have failed while granting lease
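
A hypothetical sketch of that version comparison during recovery (not the master’s actual code):

    #include <stdint.h>

    enum replica_status { REPLICA_CURRENT, REPLICA_STALE, MASTER_BEHIND };

    enum replica_status compare_chunk_versions(uint64_t master_version,
                                               uint64_t reported_version)
    {
        if (reported_version < master_version)
            return REPLICA_STALE;   /* chunkserver missed a lease grant (was down); ignore it */
        if (reported_version > master_version)
            return MASTER_BEHIND;   /* master failed while granting a lease; adopt this version */
        return REPLICA_CURRENT;     /* versions agree */
    }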

Page 56: Distributed Filesystems: NFS and GFS

56

What if Chunkserver Fails?

• Master notices missing heartbeats
• Master decrements count of replicas for all chunks on dead chunkserver
• Master re-replicates chunks missing replicas in background
  – Highest priority for chunks missing greatest number of replicas

Page 57: Distributed Filesystems: NFS and GFS

57

File Deletion

• When client deletes file:
  – Master records deletion in its log
  – File renamed to hidden name including deletion timestamp
• Master scans file namespace in background:
  – Removes files with such names if deleted for longer than 3 days (configurable)
  – In-memory metadata erased
• Master scans chunk namespace in background:
  – Removes unreferenced chunks from chunkservers

Page 58: Distributed Filesystems: NFS and GFS

What About Small Files?

• Most files stored in GFS are multi-GB; a few are shorter

• Instructive case: storing a short executable in GFS, executing on many clients simultaneously
  – 3 chunkservers storing executable overwhelmed by many clients’ concurrent requests
  – App-specific fix: replicate such files on more chunkservers; stagger app start times

58

Page 59: Distributed Filesystems: NFS and GFS

Write Performance (Distinct Files)

59

Page 60: Distributed Filesystems: NFS and GFS

Record Append Performance (Same File)

60

Page 61: Distributed Filesystems: NFS and GFS

61

GFS: Summary

• Success: used actively by Google to support search service and other applications
  – Availability and recoverability on cheap hardware
  – High throughput by decoupling control and data
  – Supports massive data sets and concurrent appends
• Semantics not transparent to apps
  – Must verify file contents to avoid inconsistent regions, repeated appends (at-least-once semantics)
• Performance not good for all apps
  – Assumes read-once, write-once workload (no client caching!)