Virtual Memory
Art Munson, CS614 Presentation, February 10, 2004


Page 1: Virtual Memory (title slide)

Page 2: VM Review

[Figure: an application's virtual address space mapped onto physical memory]

Efficient translation?

Page 3: VM Review (II)

[Figure: the same virtual-to-physical mapping, now translated through a TLB and a page table. A lookup checks the TLB first; if the page is not found there, it falls back to the page table.]

TLB:

  Virt. Page 6 → Frame 2
  Virt. Page 1 → Frame 3

Page table:

  Virt. Page   Phys. Frame
  0            0
  1            3
  2            -1
  …            …
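As a rough sketch of this lookup path in C (the direct-mapped TLB, table layout, and names here are assumptions for illustration, not any particular machine's design):

#include <stdint.h>
#include <stdbool.h>

#define TLB_SIZE 64
#define NOT_RESIDENT (-1)

typedef struct { uint32_t vpage; int32_t frame; bool valid; } tlb_entry;

static tlb_entry tlb[TLB_SIZE];
static int32_t page_table[1 << 20];  /* one frame number per virtual page; -1 = on disk */

/* Translate a virtual page number: consult the TLB first; on a miss,
   walk the page table and refill the TLB entry. */
int32_t translate(uint32_t vpage) {
    tlb_entry *e = &tlb[vpage % TLB_SIZE];        /* direct-mapped TLB */
    if (e->valid && e->vpage == vpage)
        return e->frame;                          /* TLB hit */
    int32_t frame = page_table[vpage];            /* miss: page-table lookup */
    if (frame == NOT_RESIDENT)
        return NOT_RESIDENT;                      /* would trigger a page fault */
    *e = (tlb_entry){ vpage, frame, true };       /* refill the TLB */
    return frame;
}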

Page 4: VM Pros/Cons

Pros:
• More address space
  – Per program
  – System-wide
• Isolation / Protection
• Ease of Cleanup

Cons:
• Complicates sharing memory
• Performance: context switches (cache, TLB)

Motivation: multiple interactive programs on one computer

Page 5: Mach Goals

• Portable: uniproc, multiproc, and distributed computers

• Support large, sparse virtual address space

• Copy-on-write as VM operation

• Efficient copy-on-write & read-write memory sharing

Page 6: Mach Goals (cont’d)

• Memory mapped files

• Allow generic user-provided backing store objects (and pagers)

Page 7: Mach Memory Architecture

[Figure: Mach memory architecture. Machine-dependent layer: CPU, physical memory, pmap. Machine-independent kernel layer: address maps whose entries carry protection and inheritance attributes (e.g. RW/copy, R/copy, RW/none, X/shared), reference-counted memory objects, and the pagers backing them.]

Page 8: Key Architecture Decisions

Memory Objects

• Data that can live anywhere; an associated pager maps it into one or more address spaces.

Message Passing

• Loosely coupled, allowing generality.

• Simple semantics allow a consistent API for distributed computing. Too simple?

Page 9: Implementing Memory Sharing

• Shadow objects: proxies that support copy-on-write at page granularity.

• Shared mapping: provides coherency for read-write sharing.

[Figure: tasks A and B sharing memory object C]
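To make the shadow-object idea concrete, here is a toy sketch (object_t, read_page, and write_page are illustrative names; Mach's real structures are richer): the shadow holds only the pages written since the copy, and reads fall through the chain to the original.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NPAGES 4
#define PGSZ 16

typedef struct object {
    struct object *shadowed;   /* object this one shadows, or NULL */
    char *pages[NPAGES];       /* private page copies; NULL = not copied yet */
} object_t;

/* A read falls through the shadow chain to the first object
   that holds a copy of the page. */
static char *read_page(object_t *o, int i) {
    for (; o != NULL; o = o->shadowed)
        if (o->pages[i]) return o->pages[i];
    return NULL;
}

/* A write first copies the page into the shadow (copy-on-write). */
static void write_page(object_t *o, int i, char byte) {
    if (!o->pages[i]) {
        o->pages[i] = malloc(PGSZ);
        char *orig = read_page(o->shadowed, i);
        if (orig) memcpy(o->pages[i], orig, PGSZ);
        else memset(o->pages[i], 0, PGSZ);
    }
    o->pages[i][0] = byte;
}

int main(void) {
    object_t base = {0};
    object_t shadow = { .shadowed = &base };
    base.pages[0] = malloc(PGSZ);
    strcpy(base.pages[0], "original");
    write_page(&shadow, 0, 'X');   /* copies the page, then modifies the copy */
    printf("base: %s  shadow: %s\n", base.pages[0], read_page(&shadow, 0));
    return 0;
}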

Page 10: Page Table Tradeoffs

Normal: at full size (8MB) a single linear table is not an option, so the table is split into levels that map only the regions in use.

[Figure: two-level page table. Top-level entries (Pg 0-3, Pg 3-6) each point to a second-level table of the form:]

  Pg 0 → Fr 2
  Pg 1 → Fr 0
  Pg 2 → Disk
  …

Inverted: one entry per physical frame, queried with a hash function. Aliasing (several virtual pages mapping one frame) is clumsy / expensive.

  Fr 0 → Pg 1
  Fr 1 → empty
  Fr 2 → Pg 0
  Fr 3 → Pg 6
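A minimal sketch of the inverted approach in C (the hash function, linear probing, and names are assumptions for illustration; real designs typically use a hash anchor table with chains):

#include <stdbool.h>
#include <stdint.h>

#define NFRAMES 4096

typedef struct { int pid; uint32_t vpage; bool used; } frame_entry;
static frame_entry frames[NFRAMES];   /* entry i describes physical frame i */

/* Hash (pid, vpage) to a starting frame and probe linearly until the
   owning entry or an empty slot (page not resident) is found. */
int lookup_frame(int pid, uint32_t vpage) {
    uint32_t h = (vpage * 2654435761u ^ (uint32_t)pid) % NFRAMES;
    for (int i = 0; i < NFRAMES; i++) {
        uint32_t f = (h + i) % NFRAMES;
        if (!frames[f].used) return -1;                    /* miss: page fault */
        if (frames[f].pid == pid && frames[f].vpage == vpage)
            return (int)f;                                 /* hit: frame number */
    }
    return -1;
}

Sharing a frame between two address spaces would need a second (pid, vpage) entry pointing at the same frame, which this one-entry-per-frame layout cannot express; that is the aliasing problem the slide mentions.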

Page 11: Mach VM Performance

Page 12: Mach VM Performance (II)

Page 13: Clever VM Hacks

Idea: use page fault hardware to check simple application predicates.

• OS provides hooks for application: TRAP, PROT1, PROTN, UNPROT, DIRTY, MAP2

• Implicit assumption that the faulting page is in memory, i.e. the application handles protection faults, not missing-page faults.
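On a modern POSIX system, one can approximate these hooks with mprotect and a SIGSEGV handler; this is a hedged sketch of that technique, not the paper's Mach interface (and calling mprotect inside a signal handler is not strictly async-signal-safe, though it is the standard trick):

#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static long pagesize;

/* UNPROT: make a page accessible again. */
static void unprot(void *pg) { mprotect(pg, pagesize, PROT_READ | PROT_WRITE); }

/* TRAP: the user handler run on a protection fault. */
static void trap(void *pg) {
    /* application-specific work would go here (scan, forward, ...) */
    unprot(pg);   /* then let the faulting access proceed */
}

static void on_segv(int sig, siginfo_t *si, void *ctx) {
    (void)sig; (void)ctx;
    void *pg = (void *)((uintptr_t)si->si_addr & ~(uintptr_t)(pagesize - 1));
    trap(pg);     /* returning reruns the faulting instruction */
}

int main(void) {
    pagesize = sysconf(_SC_PAGESIZE);
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_sigaction = on_segv;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    char *region = mmap(NULL, pagesize, PROT_NONE,   /* PROT1/PROTN analogue */
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    region[0] = 'x';   /* faults; trap() unprotects; the write then succeeds */
    printf("%c\n", region[0]);
    return 0;
}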

Page 14: Pointer Forwarding

[Figure: from-space and to-space]

Page 15: Pointer Forwarding (II)

[Figure: from-space and to-space, after pointers are forwarded]

Page 16: Hack #1: Concurrent Garbage Collection

[Figure: from-space and to-space]

1. Initialize collecting:
   a. Stop mutator threads.
   b. PROTN the collection region.
   c. MAP2 the region so the collector can access it (not shown).
   d. Copy roots & registers to to-space.
   e. Restart mutator threads.
   f. Collecting thread starts scanning from-space.

Page 17: Hack #1: Concurrent Garbage Collection (II)

[Figure: from-space and to-space; a mutator access to a protected page triggers a page fault]

void trap_fault(pg) {
    objList = scanPage(pg);        /* objects this page points into from-space */
    foreach (o in objList) {
        copy o to to-space;
        forward ptrs to o;
    }
    UNPROT(pg);                    /* page is now scanned; allow access */
    rerun faulting instr.;
}

Page 18: Hack #2: Shared Virtual Memory

Page 19: Hack #2: Shared Virtual Memory (II)

[Figure: CPU 1 holds pages a B c d e f g h; CPU 5 holds pages i j k B m f w h. Page B is replicated read-only on both.]

1. CPU 1 faults when trying to write read-only B.
2. CPU 1 to CPU 5: "Give me an up-to-date copy of B; then invalidate the page."
3. CPU 5: "Here you go."
4. CPU 1 now marks B as writeable and resumes execution.
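A toy, single-process simulation of this exchange (page_copy_t, write_fault, and the rest are illustrative names; a real DSM system would use actual messages plus the fault hooks above):

#include <stdio.h>
#include <string.h>

enum mode { INVALID, READ_ONLY, READ_WRITE };

typedef struct { char data[8]; enum mode m; } page_copy_t;

/* A CPU writes a read-only page: fetch the up-to-date copy from the
   current holder, invalidate it there, and upgrade locally. */
static void write_fault(page_copy_t *faulting, page_copy_t *holder) {
    memcpy(faulting->data, holder->data, sizeof faulting->data);
    holder->m = INVALID;       /* "then invalidate the page" */
    faulting->m = READ_WRITE;  /* mark writeable and resume */
}

int main(void) {
    page_copy_t cpu1 = { "old", READ_ONLY };
    page_copy_t cpu5 = { "new", READ_ONLY };
    write_fault(&cpu1, &cpu5);  /* CPU 1 tries to write page B */
    cpu1.data[0] = 'N';         /* the write now succeeds locally */
    printf("cpu1: %s (mode %d), cpu5 mode %d\n", cpu1.data, cpu1.m, cpu5.m);
    return 0;
}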

Page 20: Hack #3: Concurrent Checkpointing

Goal: save application state (registers + memory).

Algorithm (sketched in code below):
• Mark all pages read-only (PROTN).
• A background thread concurrently copies pages to the checkpoint, then marks them read/write.
• On a write fault, the checkpoint thread backs up the faulting page and marks it writeable.
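A simplified sketch of this algorithm using POSIX mprotect, a SIGSEGV handler, and a pthread (all names are illustrative assumptions; the per-page saved flags and the benign double-copy race are shortcuts a real checkpointer would close):

#include <pthread.h>
#include <signal.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define NPAGES 16
static long pgsz;
static char *region, *checkpoint;
static volatile bool saved[NPAGES];

/* Copy one page to the checkpoint, then make it writeable again. */
static void save_page(int i) {
    if (saved[i]) return;
    memcpy(checkpoint + i * pgsz, region + i * pgsz, (size_t)pgsz);
    saved[i] = true;
    mprotect(region + i * pgsz, (size_t)pgsz, PROT_READ | PROT_WRITE);
}

/* Write fault: back up the faulting page before allowing the write. */
static void on_fault(int sig, siginfo_t *si, void *ctx) {
    (void)sig; (void)ctx;
    save_page((int)(((char *)si->si_addr - region) / pgsz));
}

/* Background thread: copy the remaining pages concurrently. */
static void *copier(void *arg) {
    (void)arg;
    for (int i = 0; i < NPAGES; i++) save_page(i);
    return NULL;
}

int main(void) {
    pgsz = sysconf(_SC_PAGESIZE);
    region = mmap(NULL, NPAGES * pgsz, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    checkpoint = mmap(NULL, NPAGES * pgsz, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_sigaction = on_fault;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    mprotect(region, NPAGES * pgsz, PROT_READ);  /* PROTN: freeze the state */
    pthread_t t;
    pthread_create(&t, NULL, copier, NULL);
    region[3 * pgsz] = 'x';  /* write fault: page 3 is saved first, then unprotected */
    pthread_join(t, NULL);
    printf("all %d pages checkpointed\n", NPAGES);
    return 0;
}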

Page 21: Other VM Algorithms

• Generational garbage collection

• Persistent stores

• Extending address space in persistent stores

• Data compression paging

• Heap overflow detection

Page 22: VM Primitive Performance

Page 23: Is this a good idea?

View 1:
• Extends the VM services provided by the OS.
• Reduces instruction count => better perf*
• Data properties determine program execution, so the Mach model is an easy logical fit.

So OS designers should improve the efficiency of VM primitives.

Page 24: Is this a good idea? (II)

View 2:
• Relies on knowledge of the hardware, not an abstraction.
• Introducing more page faults might degrade performance on current highly pipelined processors.
• Can the OS enforce that handlers not look at CPU state?
• Should the OS trust application code?

Page 25: Summary

• A rich set of memory abstractions enables VM for uniproc and multiproc computers.
  – Is distributed computing tolerant of the abstraction in practice?

• Clever VM hacks may not adjust well to hardware changes.