
Virtual Memory Simulation Theorems

Mark A. Hillebrand, Wolfgang J. Paul
{mah,wjp}@cs.uni-sb.de

Saarland University, Saarbruecken, Germany

This work was partially supported by

Virtual Memory Simulation Theorems – p.1

Overview

Theorem: the parallel user programs of a main frame see a sequentially consistent virtual shared memory
(Correctness of main frame hardware & part of OS)


Context

A (practical) approach for the complete formal verification of real computer systems:

1. Specify (precisely)

2. Construct (completely)

3. Mathematical correctness proof

4. Check correctness proof by computer

5. Automate approach (partially; recall Gödel)


Example: Processor design

1.–3. [MP00] Computer Architecture: Complexity and Correctness, Springer

4. [BJKLP03] Functional verification of the VAMP processor, CHARME ’03

5. PhD thesis project of S. Tverdyshev (Khabarovsk State Technical University)


Why Memory Management?

Layers of computer systems (all using local computation and communication):

User Program

Operating System

Hardware

! In memory management, hardware and software are coupled extremely tightly.


DLX Configuration

A processor configuration of the DLX is a pair c = (R, M):

• R : {register names} → {0, 1}^32
  where register names: PC, GPR(r), status, . . .

• M : {memory addresses} → {0, 1}^8
  where memory addresses ∈ {0, 1}^32

The standard definition is an abstraction: real hardware usually has no 2^32 bytes of main memory.


DLX_V Configuration

A virtual processor configuration of DLX_V is a triple c = (R, M, r):

• R : {register names} → {0, 1}^32
  where register names: PC, GPR(r), status, . . .

• M : {virtual memory addresses} → {0, 1}^8
  where virtual memory addresses ∈ {0, 1}^32

• r : {virtual memory addresses} → 2^{R, W}
  where the rights R (read) and W (write) are identical for each page (4K).

! DLX_V is a basis for user programs.


DLX_S Configuration

A real specification machine configuration of DLX_S is a triple c_S = (R_S, PM, SM):

• R_S \ R:
  – mode: system mode (0) or user mode (1)
  – pto: page table origin
  – ptl: page table length (only for exceptions)

• PM: physical memory
• SM: swap memory

! DLX_S is the hardware specification.
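The three machine configurations above can be written down as plain records. This is a minimal sketch, not from the slides: the field names follow the definitions, but the classes and value encodings are illustrative assumptions.

```python
# Illustrative Python records for the DLX, DLX_V and DLX_S configurations.
from dataclasses import dataclass

@dataclass
class DLXConfig:        # c = (R, M)
    R: dict             # register names -> 32-bit values
    M: dict             # 32-bit byte addresses -> byte values

@dataclass
class DLXVConfig:       # c = (R, M, r)
    R: dict
    M: dict             # virtual byte addresses -> byte values
    r: dict             # page index -> subset of {"R", "W"}

@dataclass
class DLXSConfig:       # c_S = (R_S, PM, SM)
    R: dict             # R plus mode, pto, ptl
    PM: dict            # physical memory
    SM: dict            # swap memory
```

The rights map r is kept per page index, matching the remark that rights are identical within each 4K page.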


Page-Table Lookup

[Figure: an adder forms ⟨pto 0^12⟩ + 4 · ⟨px⟩, the address of the page-table entry in PM; the entry delivers ppx, which is combined with bx.]

Let c = (R_S, PM, SM).

• Virtual address va = (px, bx)
  px: page index
  bx: byte index

• PT_c(px) = PM_4(⟨pto 0^12⟩ + 4 · ⟨px⟩)

• Page-table entry layout: ppx[19:0] in bits [31:12]; bits v, w, r in the low-order bits

  ppx_c(va): physical page index
  v_c(va): valid bit (↔ page in PM)
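The lookup can be written out as a short sketch. The PTE field positions used here (ppx in bits [31:12]; v, w, r taken as bits 2, 1, 0) are assumptions for illustration, not the slides' verified layout.

```python
# Sketch of the page-table lookup: entry address and field decoding.
PAGE_BITS = 12  # 4K pages

def pte_address(pto: int, px: int) -> int:
    # <pto 0^12> + 4 * <px>: page-table base plus word offset of entry px
    return (pto << PAGE_BITS) + 4 * px

def decode_pte(pte: int):
    ppx = pte >> PAGE_BITS   # physical page index, bits [31:12]
    v = (pte >> 2) & 1       # valid bit: page is in PM (assumed position)
    w = (pte >> 1) & 1       # write right (assumed position)
    r = pte & 1              # read right (assumed position)
    return ppx, v, w, r
```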


Address Translation

[Figure: as before; ppx (20 bits) from the page-table entry is concatenated with bx (12 bits) to form pma_c(va).]

Let c = (R_S, PM, SM).

• Virtual address va = (px, bx)
  px: page index
  bx: byte index

• pma_c(va) = (ppx_c(va), bx)
  pma_c: physical memory address

• To access swap memory, we also define:
  sma_c: swap memory address (e.g. sbase_c + va)


Instruction Execution

DLX_V uses virtual addresses:

• Fetch: va = DPC (delayed PC)
• Effective address of load/store:
  va = ea = GPR(RS1) + imm (register relative)

Hardware DLX_S for mode = 1 (user):

• If v_c(va), use the translated address pma_c(va) instead of va.
• Otherwise, exception.
  (hardware supplies parameters for the page fault handler)
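The user-mode access rule above can be sketched as a function that either returns the translated address or raises a page-fault exception for the handler. The helper names and the page-table encoding are assumptions, not the slides' notation.

```python
# Sketch: translate a virtual address in user mode, or signal a page fault.
PAGE_BITS = 12

class PageFault(Exception):
    """Raised when v_c(va) = 0; the handler receives the faulting va."""

def split_va(va: int):
    # va = (px, bx): page index and byte index
    return va >> PAGE_BITS, va & ((1 << PAGE_BITS) - 1)

def pma(va: int, page_table: dict) -> int:
    px, bx = split_va(va)
    entry = page_table.get(px)
    if entry is None or not entry["v"]:
        raise PageFault(va)          # hardware supplies va to the handler
    return (entry["ppx"] << PAGE_BITS) | bx
```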



Hardware Implementation

[Figure: IMMU between the CPU’s fetch port and the ICache, DMMU between the load/store port and the DCache; both caches are backed by PM.]

• Build 2 hardware boxes MMU (memory management unit for fetch and load/store) between CPU and caches
• Show it translates
• Done

No!


Hardware Implementation

[Figure: as before, IMMU and DMMU between CPU and caches.]

• Build hardware boxes MMU & a few gates
• Identify software conditions
• Show MMU translates if software conditions are met
• Show software meets conditions
• Almost done

We do not care about translation (purely technical), we care about a simulation theorem.


Simulating DLX_V by DLX_S

Let c = (R_S, PM, SM) and c_V = (R_V, VM, r).
Define a projection: c_V = Π(c)

• Identical register contents: R_V(r) = R_S(r)

• Rights in page table:
  R ∈ r(va) ⇔ r_c(va) = 1
  W ∈ r(va) ⇔ w_c(va) = 1
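The register and rights clauses of the projection Π can be stated as an executable predicate. This is a sketch under assumed data layouts (registers and page table as dicts), not the slides' formal definition.

```python
# Sketch: check the register and rights conditions of c_V = Pi(c_S).
def is_projection(cS_regs: dict, cS_pt: dict,
                  cV_regs: dict, cV_rights: dict) -> bool:
    if cS_regs != cV_regs:                        # R_V(r) = R_S(r)
        return False
    for px, entry in cS_pt.items():
        rights = cV_rights.get(px, set())
        if ("R" in rights) != bool(entry["r"]):   # R in r(va) <-> r_c(va) = 1
            return False
        if ("W" in rights) != bool(entry["w"]):   # W in r(va) <-> w_c(va) = 1
            return False
    return True
```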


Simulating DLX_V by DLX_S

VM(va) = PM(pma_c(va))  if v_c(va)
       = SM(sma_c(va))  otherwise

[Figure: valid case (v = 1): the page-table entry PT(px) points to page(px) inside PM.]

PM is a cache for virtual memory, PT is the cache directory.
Handlers (almost!) work accordingly (select victim, write back to SM, swap in from SM)


Simulating DLX_V by DLX_S

VM(va) = PM(pma_c(va))  if v_c(va)
       = SM(sma_c(va))  otherwise

[Figure: invalid case (v = 0): page(px) lies in SM.]

PM is a cache for virtual memory, PT is the cache directory.
Handlers (almost!) work accordingly (select victim, write back to SM, swap in from SM)
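The definition of VM above is directly executable: PM serves the valid pages, SM the rest at sma(va) = sbase + va. This is a sketch with assumed memory and page-table layouts.

```python
# Sketch: read one byte of the simulated virtual memory VM.
PAGE_BITS = 12

def vm_read(va: int, page_table: dict, PM: dict, SM: dict, sbase: int) -> int:
    px, bx = va >> PAGE_BITS, va & ((1 << PAGE_BITS) - 1)
    entry = page_table[px]
    if entry["v"]:
        # page is in PM: PM acts as a cache, PT as its directory
        return PM[(entry["ppx"] << PAGE_BITS) | bx]   # PM(pma_c(va))
    return SM[sbase + va]                             # SM(sma_c(va))
```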


Software Conditions

1. OS code and data structures (PT, sbase, free space) maintained in system area Sys ⊆ PM

2. OS does not destroy its code & data

[Figure: PM divided into Sys (page table, sbase, . . . ) and a User area.]

3. User program (UP) cannot modify Sys (impossibility of hacking)

4. Writes to code section are separated from reads in code section by sync or (syncing) rfe

Standard sync empties a pipelined or OoO (out of order) machine before the next instruction is issued.

! Swap in code, then user mode fetch = self-modification of code by OS & UP


Guaranteeing Software Conditions

1. Operating system construction

2. Operating system construction

3. No pages of Sys allocated via PT to UP

4. UP alone not self-modifying, handlers end with rfe


Hardware I

CPU – memory system – protocol

[Figure: CPU with fetch port to the ICache and load/store port to the DCache, both backed by PM. Timing diagrams: on a cache miss, the CPU drives addr (here DPC) and mr, and the memory system raises mbusy until dout delivers the instruction I; on a cache hit, I is available within the clock cycle.]


Hardware II

Inserting 2 MMUs:

[Figure: IMMU between the fetch port and the ICache, DMMU between the load/store port and the DCache; both caches backed by PM.]

Must obey the memory protocol at both sides!


Primitive MMU

Primitive MMU controlled by a finite state diagram (FSD).

[Figure: MMU datapath with address register ar[31:0] and data register dr[31:0], an adder forming the page-table entry address from (pto, 0^12) and the page index, a comparator against ptl[19:0] raising the length exception lexcp, and the pte[31:0] fields ppx = pte[31:12] and (r, w, v). FSD states: idle; seta (untranslated access, p.req & /p.t: pass the address through); add (translated access, p.req & p.t: compute the pte address); readpte (read the pte; may raise lexcp or pteexcp); comppa (compose the physical address from ppx and the byte offset); read; write. The MMU keeps p.busy active towards the processor while working and drives the cache signals c.mr, c.mw, waiting on c.busy.]

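The FSD's control flow can be sketched as a function from the kind of request to the sequence of states visited. The state names follow the diagram; the transition logic shown here is a plausible reconstruction under stated assumptions, not the verified hardware.

```python
# Sketch of the primitive MMU's FSD: which states one request passes through.
def mmu_states(translated: bool, pte_valid: bool, is_read: bool):
    """Return the assumed sequence of FSD states for one memory request."""
    if not translated:
        # untranslated (p.req & /p.t): pass the address straight through
        return ["idle", "seta", "read" if is_read else "write"]
    states = ["idle", "add", "readpte"]      # add: pte address = pto*4K + 4*px
    if not pte_valid:
        return states + ["idle"]             # pteexcp: page fault, abort
    states.append("comppa")                  # compose ppx with the byte index
    states.append("read" if is_read else "write")
    return states
```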

MMU Correctness

Local translation lemma: Let T and V denote the start and end of a translated read request, no exception. Let t ∈ {T, . . . , V}.

Hypothesis: the following 4 groups of inputs do not change in cycles t (i.e. X^t = X^T):

G0 : va = p.addr^t, p.rd^t, p.wr^t, p.req^t
G1 : pto^t, ptl^t, mode^t
G2 : PT^t
G3 : PM^t(pma^t(va))

Claim: p.din^V = PM^T(pma^T(va))
Proof: plain hardware correctness


Guaranteeing Hypotheses G_i

G0: MMU keeps p.busy active during translation

G1: Extra gates: normal sync before issue not enough. If rfe or an update to {mode, pto, ptl} is in the issue stage, stop translation of the fetch of the next instruction.

G2: User program cannot modify Sys. Preceding system code terminated (by sync)

G3: Fetch: correct by sync. Load: assumes a non-pipelined, in-order memory unit; extra arguments otherwise


Global Hardware Correctness (fetch)

Define scheduling functions I(k, T) = i: instruction I_i is in stage k during cycle T iff (. . . )

• Similar to tag computation
• Key concept for hardware verification in SB

Hypothesis: I(fetch, T) = i, translation from T to V

Claim: IR.din^V = PM_S^i(pma_S^i(DPC_S^i))

Formal proof: part of the PhD thesis project of I. Dalinger (Khabarovsk State Technical University)


Virtual Memory Theorem (SW only!)

Consider a computation of DLX_S:

[Figure: the computation c^0, . . . , c^{α−1}, . . . runs through alternating phases: Initialisation (mode 0), User Program (mode 1), Handler (mode 0), User Program, Handler, . . . ]

Initialisation: Π(c_S^α) = c_V^0

Simulation Step Theorem:

! 2 page faults per instruction possible (fetch & load/store)


Virtual Memory Theorem II

Assume Π(c_S^i) = c_V^j.

Define:

• Same cycle or first cycle after handler:

  s1(i) = i                      if ¬pff^i ∧ ¬pfls^i
        = min{j > i : mode^j}    otherwise

• Cycle after page-fault-free user mode step:

  s2(i) = i + 1                  if ¬pff^i ∧ ¬pfls^i
        = s1(s1(i)) + 1          otherwise

Claim: Π(c_S^{s2(i)}) = c_V^{j+1}
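The scheduling functions s1 and s2 can be computed over a recorded DLX_S trace. The trace encoding (pff/pfls flag page faults on fetch and load/store, mode[j] is true in user mode) is an assumption for this sketch.

```python
# Sketch: the scheduling functions s1 and s2 over a finite trace.
def s1(i, pff, pfls, mode):
    # same step if no page fault, else first user-mode step after the handler
    if not pff[i] and not pfls[i]:
        return i
    return min(j for j in range(i + 1, len(mode)) if mode[j])

def s2(i, pff, pfls, mode):
    # step after a page-fault-free user mode step; s1 is applied twice
    # because both a fetch fault and a load/store fault may occur
    if not pff[i] and not pfls[i]:
        return i + 1
    k = s1(i, pff, pfls, mode)
    return s1(k, pff, pfls, mode) + 1
```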


Liveness

We must maintain in Sys: MRSI (most recently swapped-in page)

The page fault handler must not evict page MRSI!

Formal proof: PhD thesis project of T. In der Rieden (Saarbrücken)


Translation Look-aside Buffers I

1-level lookup: formally caches for the PT-region of PM

• Consistency of 4 caches: ICache, DCache, ITLB, DTLB
• Simply invalidate TLBs at mode switch to 1; sufficient by the sync conditions


Translation Look-aside Buffers II

Multi-level lookup: the TLB is simplified cache hardware:

• Normal cache entry: v = 1 | tag | PM(tag, c_ad), where c_ad is the cache address
• TLB entry: v = 1 | tag | pma^t(tag, c_ad)
  t: time of last sync / rfe
  Invalidate at mode switch to 1
• No writeback or load of lines
• Only ‘cache’ reads and writes of values pma(va) by the MMU

Formal verification trivial from the verified cache
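The TLB discipline above (cache translated addresses rather than data, never write back, flush on the switch to user mode) can be sketched as a tiny class. The class and its interface are illustrative assumptions, not the verified hardware.

```python
# Sketch: a TLB that caches translations pma(va) per page index.
class TLB:
    def __init__(self):
        self.entries = {}            # px -> ppx (cached translation)

    def invalidate(self):
        # at mode switch to user mode: drop all cached translations
        self.entries.clear()

    def lookup(self, px):
        # hit: return the cached physical page index, else None
        return self.entries.get(px)

    def fill(self, px, ppx):
        # entries are written only by the MMU after a page-table walk;
        # there is no write-back, so dropping entries is always safe
        self.entries[px] = ppx
```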


Multiuser with Sharing

• Easy implementation and proof of protection properties using the right bits r(va) and w(va)


Main Frames I

[Figure: several processors (Proc) connected to a Shared Memory.]

• PM: sequentially consistent shared memory (by cache coherence protocol)

• New software condition: before a change of any page table entry, all processors sync

• Sync hardware: some AND trees and driver trees interfaced with the CPU. Considered alone, almost completely meaningless.


Main Frames II

Theorem: user programs see a sequentially consistent virtual shared memory

Proof: phases alternate OS – UPs – OS – UPs – . . .
Global serial schedule:

• in each phase from sequential consistency of the physical shared memory
• straightforward composition across phases
• remaining arguments not changed!


Summary

• Mathematical treatment of memory management
• Intricate combination of hardware and software considered
• Formalization under way



Future Work

Formal verification of

• compilers,
• operating systems,
• applications,
• communication systems

in industrial context . . .

. . . with a little help from my friends.
