
Page 1: CS414 Review Session

CS414 Review Session

Page 2: CS414 Review Session

Address Translation

Page 3: CS414 Review Session

Example

• Logical Address: 32 bits

• Number of segments per process: 8

• Page size: 2 KB

• Page table entry size: 2B

• Physical Memory: 32MB

• Paged Segmentation

• 2 level paging

Page 4: CS414 Review Session

Logical Address Space

• Total number of bits: 32
• Page offset: 11 bits (2 KB = 2^11 B)
• Segment number: 3 bits (8 = 2^3)
• Number of pages per segment: 2^18 (32 - 3 - 11 = 18)
• Number of page table entries in one page of the page table: 1K (2 KB / 2 B)
• Page number in inner page table: 10 bits (1K = 2^10)
• Page number in outer page table: 8 bits (18 - 10)
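The field widths above are pure arithmetic; a quick sketch reproduces them (variable names are mine, values are the slide's parameters):

```python
import math

# Parameters from the example slide.
logical_bits = 32
segments = 8                  # segments per process
page_size = 2 * 1024          # 2 KB
pte_size = 2                  # 2 B per page table entry
phys_mem = 32 * 1024 * 1024   # 32 MB

offset_bits = int(math.log2(page_size))                # 11 bits of page offset
segment_bits = int(math.log2(segments))                # 3 bits of segment number
page_bits = logical_bits - segment_bits - offset_bits  # 18 -> 2^18 pages/segment
entries_per_page = page_size // pte_size               # 1024 PTEs fit in one page
inner_bits = int(math.log2(entries_per_page))          # 10 bits for inner table
outer_bits = page_bits - inner_bits                    # 8 bits for outer table
frame_bits = int(math.log2(phys_mem // page_size))     # 14 bits (16K frames)

print(offset_bits, segment_bits, page_bits, inner_bits, outer_bits, frame_bits)
# -> 11 3 18 10 8 14
```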

Page 5: CS414 Review Session

Segment Table

• Number of entries = 8

• Width of each entry (sum of):
  – Base address of outer page table: 14 bits (number of page frames = 16K = 32 MB / 2 KB)
  – Length of segment: 29 bits (32 - 3)
  – Miscellaneous items

Page 6: CS414 Review Session

Page Table

• Outer page table:
  – Number of entries = 2^8
  – Width of entry (sum of):
    • Page frame number of inner page table: 14 bits
    • Miscellaneous bits (a total of 2 B is specified)
• Inner page table:
  – Number of entries = 2^10
  – Width: same as outer page table
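Putting the widths together, a logical address splits as | segment (3) | outer page (8) | inner page (10) | offset (11) |. A sketch of the split, using a made-up address purely for illustration:

```python
# Split a 32-bit logical address into the four fields of the scheme above:
# | segment (3) | outer page (8) | inner page (10) | offset (11) |
def split(addr):
    offset = addr & 0x7FF          # low 11 bits: byte offset within the page
    inner = (addr >> 11) & 0x3FF   # next 10 bits: inner page table index
    outer = (addr >> 21) & 0xFF    # next 8 bits: outer page table index
    seg = (addr >> 29) & 0x7       # top 3 bits: segment number
    return seg, outer, inner, offset

# Hypothetical address, spelled out in binary so the fields are visible.
print(split(0b010_00000011_0000000101_00000000111))   # -> (2, 3, 5, 7)
```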

Page 7: CS414 Review Session

Translation Look-aside Buffer

• Just an Associative Cache

• Number of entries (fixed size, set in advance)
• Width of each entry (sum of):
  – Key: segment# + page# = 3 + 18 = 21 bits
    • Some TLBs also include a process ID in the key.
  – Value: page frame# = 14 bits
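A minimal sketch of the associative lookup, with hypothetical mappings (a real TLB is a small fixed-size hardware cache, not a dictionary):

```python
# TLB as an associative cache: key = (segment#, page#), value = frame#.
tlb = {(2, 5): 0x1A3, (0, 17): 0x02F}    # made-up entries

def translate(seg, page, offset, page_size=2048):
    frame = tlb.get((seg, page))          # associative match on the key
    if frame is None:
        # TLB miss: would fall back to walking segment + page tables.
        raise LookupError("TLB miss")
    return frame * page_size + offset     # physical address

print(hex(translate(2, 5, 100)))   # -> 0xd1864
```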

Page 8: CS414 Review Session

The Page Size Issue

• With a very small page size, each page holds only code that is actually used, so page faults are low.
• With an increased page size, each page also contains code that is not used; fewer pages fit in memory, and page faults rise (thrashing).
• Small pages mean large page tables and costly translation.
• Typical compromise: 2 KB to 8 KB.

Page 9: CS414 Review Session

Load Control

• Determines the number of processes resident in main memory (i.e. the multiprogramming level).
  – Too few processes: often all processes will be blocked and the processor will be idle.
  – Too many processes: the resident size of each process will be too small, and flurries of page faults will result (thrashing).

Page 10: CS414 Review Session

Handling Interrupts and Traps

• Terminate the current instruction(s).
  – Pipeline flush.
• Save state.
  – Registers, PC; some instructions may need to be repeated.
• Invoke the interrupt-handling routine.
  – Interrupt vector table.
  – User-space to kernel-space context switch.
• Execute the interrupt-handling routine.
• Invoke the scheduler to schedule a ready process.
  – Kernel-space to user-space context switch.

Page 11: CS414 Review Session

Disk Optimizations

• Seek time (biggest overhead)
• Disk scheduling algorithms
  – SSTF, SCAN, C-SCAN, LOOK, C-LOOK
• Contiguous file allocation
  – Place contiguous blocks on the same cylinder.
  – Failing the same track, use the same-numbered track on another surface.
• Organ pipe distribution
  – Place the most used blocks (i-nodes, directory structure) closer to the middle of the disk.
  – Place the head in the middle of the disk.

• Use multiple heads.
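Of the scheduling algorithms listed, SSTF is the simplest to sketch: always serve the pending request closest to the current head position (the request queue below is made up):

```python
# SSTF (shortest seek time first): repeatedly pick the pending request whose
# cylinder is nearest the current head position.
def sstf(head, requests):
    pending, order = list(requests), []
    while pending:
        nxt = min(pending, key=lambda c: abs(c - head))   # closest cylinder
        pending.remove(nxt)
        order.append(nxt)
        head = nxt                                        # head moves there
    return order

# Hypothetical queue of cylinder numbers, head starting at cylinder 50.
print(sstf(50, [95, 180, 34, 119, 11, 123, 62, 64]))
# -> [62, 64, 34, 11, 95, 119, 123, 180]
```

Note SSTF can starve far-away requests, which is why SCAN-family algorithms exist.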

Page 12: CS414 Review Session

Disk Optimizations

• Rotational latency (next biggest overhead)

• Interleaving
  – Adjacent logical sectors are not physically adjacent on the disk.

• Disk cache
  – Cache all sectors on the track (2 rotations).
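The interleaving idea can be sketched as a placement loop: logical sector i goes every k-th physical slot, giving the controller time to process one sector before the next one passes under the head (the interleave factor k is an assumed parameter):

```python
# Lay out n logical sectors on a track with interleave factor k:
# advance k physical slots after placing each logical sector.
def interleave(n_sectors, k):
    layout = [None] * n_sectors
    slot = 0
    for logical in range(n_sectors):
        while layout[slot] is not None:        # skip slots already filled
            slot = (slot + 1) % n_sectors
        layout[slot] = logical
        slot = (slot + k) % n_sectors
    return layout

# 7 sectors with interleave 2: physically adjacent slots hold
# logically non-adjacent sectors.
print(interleave(7, 2))   # -> [0, 4, 1, 5, 2, 6, 3]
```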

(Figure: seven sectors numbered around the track, illustrating interleaving.)

Page 13: CS414 Review Session

Redundant Array of Inexpensive Disks

• Mirroring or shadowing
  – Expensive; small gain in read time; reliable.
• Striping
  – Inexpensive; faster access time; not reliable.
• Striping + parity
  – Inexpensive; small performance gain; reliable.
• Interleaving + parity + striping
  – Inexpensive; faster access time; reliable.
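The parity schemes rely on XOR: the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors. A sketch with made-up data blocks:

```python
# XOR a list of equal-length blocks together byte by byte.
def xor_blocks(blocks):
    out = bytes(len(blocks[0]))
    for b in blocks:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

# Hypothetical stripe of three data blocks plus their parity block.
data = [b"disk", b"fail", b"okay"]
parity = xor_blocks(data)

# Disk holding data[1] dies: XOR the surviving blocks with parity to rebuild.
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt)   # -> b'fail'
```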

Page 14: CS414 Review Session

Storage Hierarchy

Level               Access Latency    Capacity
Register            ~nsec             B
Level 1 Cache       ~nsec             KB+
Level 2 Cache       ~100 nsec         500 KB+
Main Memory         ~usec             100 MB+
Hard Disk           ~msec             GB+
Tertiary Storage    ~sec              TB
Network             10-1000 usec      ??

Page 15: CS414 Review Session

Paging vs Segmentation

Paging:
• Fixed-size partitions.
• Internal fragmentation (average = page size / 2).
• No external fragmentation.
• Small chunk of memory (~4 KB).
• Linear address space, invisible to the programmer.

Segmentation:
• Variable-size partitions.
• No internal fragmentation.
• External fragmentation (handled by compaction or paged segments).
• Large chunk of memory (~1 MB).
• Logical address space, visible to the programmer.
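The "average = page size / 2" figure is the unused tail of an allocation's last page; a quick check with made-up allocation sizes:

```python
# Internal fragmentation under paging: a request is rounded up to whole
# pages, wasting the unused tail of the last page (page size / 2 on average).
page_size = 2048   # the 2 KB page from the example

def wasted(nbytes):
    return -nbytes % page_size   # bytes left unused in the last page

print(wasted(5000))   # -> 1144 (3 pages = 6144 B hold 5000 B)
print(wasted(4096))   # -> 0 (exact multiple of the page size)
```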

Page 16: CS414 Review Session

Demand-paging vs Pre-paging

Demand paging:
• Pages swapped in on demand.
• More page faults (especially initially).
• No wastage of page frames.
• No pre-paging overhead.

Pre-paging:
• Pages swapped in before use, in anticipation.
• Reduces future page faults.
• Pages may not be used (wastage of memory space).
• Needs good strategies for what to pre-page (working set, contiguous pages, etc.).

Page 17: CS414 Review Session

Local vs Global Page Replacement.

Local:
• Only swap out the current process' pages.
• Page frame allocation strategies required (e.g. page fault frequency).
• Thrashing affects only the current process.
• Admission control required.
• Can use a different page replacement algorithm for each process.

Global:
• Swap out any page in memory.
• No explicit allocation of page frames.
• Can affect the performance of other processes.
• Admission control required.
• Single page replacement algorithm for all processes.

Page 18: CS414 Review Session

Interrupt driven IO vs Polling

Interrupt-driven I/O:
• Each interrupt has a fixed processing-time overhead (context switches).
• Other processes can execute while waiting for the response.
• Good for long or indefinite response times.
• Example: printer.

Polling:
• Response time is variable (device- and request-specific).
• No other process can execute while waiting for the response.
• Good for short, predictable response times (< the fixed interrupt overhead).
• Example: fast networks.

Page 19: CS414 Review Session

Contiguous vs Indexed Allocation

Contiguous:
• All blocks of the file are in contiguous disk locations.
• No additional index overhead (disk addresses can be computed).
• Disk fragmentation is a major problem (compaction overhead).
• Smart allocation strategies required.
• Low average latency for sequential access (only one long seek; smart block layouts).

Indexed:
• Blocks of the file are distributed throughout the disk.
• Each access involves a search in the index (may require fetching additional blocks from the disk).
• No fragmentation on the disk.
• No allocation strategies required.
• Higher average latency (mitigated by disk scheduling algorithms).
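The "disk addresses can be computed" point is just arithmetic: with a contiguous file, no index block ever needs to be fetched. A sketch with an assumed block size and start block:

```python
# Contiguous allocation: the disk block holding any byte of the file is
# computed directly from the file's start block, with no index lookup.
def block_of(start_block, byte_offset, block_size=512):
    return start_block + byte_offset // block_size

# Hypothetical file starting at disk block 1000.
print(block_of(1000, 0))      # -> 1000
print(block_of(1000, 1500))   # -> 1002
```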

Page 20: CS414 Review Session

Contiguous vs Linked Allocation

Contiguous:
• All blocks are at contiguous disk addresses.
• The disk address for each access can be computed directly.
• Suffers from disk fragmentation.
• Bad sectors break the contiguity of blocks.

Linked:
• Blocks are arranged in a linked-list fashion.
• Each access may involve traversing the list from the start.
• No disk fragmentation.
• All bad blocks can be hidden away as a file.
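The traversal cost of linked allocation can be sketched FAT-style, where a table entry names the file's next block (the chain below is hypothetical):

```python
# Linked allocation, FAT-style: each table entry gives the next block of
# the file, so reaching block n means following n links from the head.
fat = {7: 12, 12: 3, 3: 19, 19: None}   # made-up chain: 7 -> 12 -> 3 -> 19

def nth_block(head, n):
    block = head
    for _ in range(n):       # no direct arithmetic; must walk the chain
        block = fat[block]
    return block

print(nth_block(7, 2))   # -> 3
```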

Page 21: CS414 Review Session

Hard Disks vs Tapes

Hard disks:
• Small capacity (a few GB).
• Subject to various failures (disk crashes, bad sectors, etc.).
• Random access latency is very small (msec).

Tapes:
• Huge capacity per unit volume (TB).
• Permanent storage (no corruption for a long time).
• Very high random access latency (sec); the tape must be read from the beginning.

Page 22: CS414 Review Session

Unix FS vs Log FS

UNIX FS:
• Index used to map i-nodes to physical blocks.
• Same read latency as indexed allocation.
• Writes take place on the same block the data was read from.
• Write latency is dominated by seek time.
• No garbage collection required.
• Crash recovery is extremely difficult.

Log FS:
• Index used to map i-nodes to physical blocks.
• Same read latency as the UNIX FS.
• Writes are batched together and done on sequential blocks.
• Write latency is small because seek time is amortized.
• Garbage collection required to free old blocks.
• Checkpoints enable efficient recovery from crashes.
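The amortized-seek claim is simple arithmetic: N scattered in-place writes pay N seeks, while a log batches them behind one seek. Illustrative timings (the numbers are assumed, not measured):

```python
# Compare N in-place writes (one seek each) against one log append
# (one seek, then N sequential transfers). Times in microseconds.
seek_us, transfer_us, n = 10_000, 100, 100   # assumed illustrative values

in_place = n * (seek_us + transfer_us)   # seek to each block's home location
log_write = seek_us + n * transfer_us    # one seek to the tail of the log
print(in_place, log_write)   # -> 1010000 20000
```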

Page 23: CS414 Review Session

Routing Strategies

Fixed:
• Permanent path between A and B.
• Congestion is independent of path choice.
• No set-up costs.
• Sequential (in-order) delivery.

Virtual circuit:
• Per-session path between A and B.
• Some attempt to spread congestion evenly.
• Per-session set-up cost.
• Sequential delivery.

Dynamic:
• A different path per message between A and B.
• Congestion spread uniformly across paths.
• Per-message set-up cost.
• Out-of-order delivery.

Page 24: CS414 Review Session

Connection Strategies

Circuit switching:
• Permanent (hardware) link between A and B.
• Congestion is independent of links.
• No set-up costs.
• Sequential delivery.

Message switching:
• Per-message link between A and B.
• Some attempt to spread congestion evenly.
• Initial set-up cost.
• Sequential delivery.

Packet switching:
• A different link per packet between A and B.
• Congestion spread uniformly across links (best link chosen).
• No set-up cost.
• Out-of-order delivery.
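Out-of-order delivery means the receiver must reorder by sequence number before handing data up; a minimal sketch with made-up packets:

```python
# Reassemble a message from packets that arrived out of order,
# sorting on the sequence number carried by each packet.
def reassemble(packets):
    return b"".join(data for _, data in sorted(packets))

# Hypothetical (sequence number, payload) pairs, arriving out of order.
arrived = [(2, b"lo "), (0, b"he"), (3, b"net"), (1, b"l")]
print(reassemble(arrived))   # -> b'hello net'
```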