
CSC 343 – Operating Systems, Fall 2013, Assignment 3, due Monday November 25

This assignment is due by midnight on Monday November 25 via gmake turnitin as explained below. To get the starting code for the project, follow these steps after logging into acad/bill:

cp ~parson/OpSys/stm3page.problem.zip OpSys/stm3page.problem.zip
cd ./OpSys
unzip stm3page.problem.zip
cd ./stm3page
ssh harry
cd ./OpSys/stm3page

All of your programming and testing must occur on multiprocessor “harry.” I have already generated the state machine diagrams for this assignment; see below. All state machines in this assignment have identical graphs. All work must occur within your OpSys/stm3page directory on harry.

In this assignment I am supplying a FIFO page replacement algorithm in file rr_fifopage.stm. You can run gmake testrr_fifopage to test it. You must write state machine programs rr_lrupage.stm and rr_lrupage_dirty.stm as outlined below. When everything runs, perform gmake clean test one last time, then perform the manual inspection outlined next. Please complete survey.txt and then gmake turnitin by the due date.

Instructions for the manual inspection are to look at the following output from gmake clean test.

TESTING COMPLETED
grep 'MEAN_waiting_paging=' *.crunch
rr_fifopage.crunch:MEAN_waiting_paging=36486.5978261
rr_lrupage.crunch:MEAN_waiting_paging=35319.0947368
rr_lrupage_dirty.crunch:MEAN_waiting_paging=30394.5405405

The rr_fifopage algorithm should show the highest mean time that threads wait for demand paging to complete its I/O (in the waiting_paging state), rr_lrupage_dirty should show the lowest mean time for this measurement, and rr_lrupage should fall in between. I will inspect this ordering as part of grading. You can invoke gmake testrr_lrupage while testing rr_lrupage.stm, and gmake testrr_lrupage_dirty while testing rr_lrupage_dirty.stm. Make sure to invoke gmake clean test at the end and inspect the above output manually, and please complete survey.txt by replacing the asterisks with integer numbers. LRU is worth 50% of the project grade, and LRU-DIRTY is worth the other 50%. I will give partial credit for solutions with algorithm bugs, but they must compile. Each test takes a minute or so to complete, so a successful test of LRU and LRU-DIRTY takes about 2 to 2.5 minutes on harry.

Below is the state machine graph for all three algorithms [1], followed by a discussion of the overall approach, and then my rr_fifopage.stm example solution. I have annotated the state machine graph with some relevant PCB data structures used by all three programs. You need to understand and use them.

[1] http://bill.kutztown.edu/faculty/parson/rr_fifopage.jpg

[Figure: annotated state machine graph shared by rr_fifopage, rr_lrupage, and rr_lrupage_dirty; see rr_fifopage.jpg at the footnote [1] URL above.]

Each of our page replacement schedulers uses an identical state machine graph with an identical set of states. You may or may not need to add transitions due to varying guard expressions and actions. I did not need to add transitions, but implementations may vary.

All three of the algorithms proceed as follows. The simulation forks 10 processes with 5 threads in each. Thread 0 in each process manages demand paging in the lower left portion of the graph (a reference to this thread object resides in the pcb.pager field of the Process Control Block, or pcb). Thread 0 is responsible for taking frameRequests from application threads 1 through 4 when the pcb.freeframeQueue is empty, dequeuing a victim page from the pcb.victimQueue, flushing the victim page’s frame to disk only if the page is dirty (i.e., it was written by the process), reading in the requested page from disk, putting the new page into the requesting thread’s pcb.pagetable slot, and then signaling the thread at the front of the pcb.waitingForFrameQueue with a frameReady event. Note that each element in pcb.waitingForFrameQueue is a (thread, page) ordered pair of values (a 2-tuple in Python) that holds the reference to the waiting thread and the page number for which the thread needs a physical frame. When the pcb.freeframeQueue is not empty, an application thread can take a frame when needed from there without making a request to pager thread 0.

Application threads 1 through 4 operate in the right side of the state machine graph. The major change to that part of the state machine from Assignment 2 is that the former running state has become three states. State running_cpu simulates running using only CPU registers, running_memory simulates interacting with memory pages that are mapped to physical frames, and waiting_paging simulates waiting for a page’s frame to be read from disk as part of demand paging. Note that mean waiting_paging time for each thread is the main simulation measurement of interest. Better page replacement algorithms make this number smaller. We will go over the graph and its correlation to the code below in class on November 11.

1 # CSC 343, Fall 2013, STUDENT NAME:
2 # rr_fifopage.stm implements a preemptive round-robin CPU scheduler
3 # and a FIFO page replacement algorithm with per-process *proportional
4 # allocation* (in contrast to *equal allocation*) and *local frame
5 # replacement* (in contrast to *global frame replacement*) to keep
6 # the simulation somewhat simpler. We are comparing FIFO, LRU, and
7 # LRU with major priority for non-dirty pages within the framework of
8 # *proportional allocation* and *local frame replacement*.
9
10 machine processor {
11 # Use this machine in all of your files in assignment 3 to start processes.
12 # It starts 10 processes, one every tick. Each process starts life with
13 # thread 0, and that thread spawns 4 additional threads and initializes
14 # some variables. Thread 0 manages the frame recovery algorithm.
15 threadsToGo = 10 ;
16 start init, state makingThreads, accept doneMakingThreads ;
17 init -> makingThreads init()[]/@
18 processor.readyq = Queue(ispriority=False);
19 threadsToGo -= 1 ; fork()@
20 makingThreads -> makingThreads fork(pid, tid)[@threadsToGo > 0@]/@
21 threadsToGo -= 1 ; fork()@
22 makingThreads -> doneMakingThreads fork(pid, tid)[@threadsToGo == 0@]/
23 }
24
25 # This assignment uses round-robin CPU scheduling with 50% IO-bound and
26 # 50% CPU-bound threads as in assignment 2. Also, 50% of the threads
27 # have high (good) locality for memory references, and half have low
28 # (poor) locality.

29 machine thread {
30 quantum = 125, machineid = -1, pid = -1, tid = -1, iobound = @False@,
31 islocal = @False@, endtime = 100000, addressBitsInPage = 10,
32 minFreeFrames = 1 ;
33 # islocal is True if the thread shows good locality of reference.
34 # 2^addressBitsInPage gives the number of addresses within a page,
35 # for example 2^10 = 1024 = 1K page size in this simulation.
36 start init, state postinit, state pcbinit,
37 state pagingWait, state pagingIn, state pagingOut,
38 state scheduling, state ready,
39 state running_cpu, state running_memory,
40 state waiting_paging, state waiting_io,
41 state rescheduling, accept terminated ;
42 # Initialize variables used by all threads:
43 init -> postinit init()[]/@machineid, pid, tid = getid();
44 iobound = True if ((pid % 2) == 1) else False ;
45 # ^^^ The odd pids are IO bound. The others (50%) are CPU bound.
46 islocal = True if ((tid % 2) == 1) else False ;
47 # ^^^ The odd thread ids have the good locality.
48 pagesize = 1 << addressBitsInPage ;
49 pagemask = pagesize - 1 ;
50 pagingDisk = len(processor.fastio) - 1;
51 yieldcpu()@
52 # START OF STATES LIMITED TO THREAD 0, WHICH IS pcb.pager
53 postinit -> pcbinit yieldcpu()[@tid == 0@]/@
54 # Thread 0 sets up the PROCESS CONTROL BLOCK (PCB),
55 # then spawns application threads, then manages page recovery
56 # in its own cycle of the state machine graph.
57 # All threads in a process share access to a single PCB.
58 pcb.pagecount = sample(10, 100, 'uniform');
59 pcb.framecount = int(pcb.pagecount / 4);
60 # STUDENT: The entry that goes into each pagetable element
61 # is up to you. For rr_fifopage I am storing the following
62 # fields at each entry, where "entry" is a pagetable element:
63 # entry[0] gets a non-negative frame number, or -1 if this
64 # page has not yet been allocated a frame or is paged out
65 # entry[1] starts as a 0 when the page is first read into a
66 # frame, is set to 1 whenever a thread writes into the page;
67 # it is the "dirty bit". It determines whether the frame
68 # must be written to disk when the frame is reused.
69 # entry[2] is the current handle into the pcb.victimQueue
70 # for this page entry (as returned by Queue.enq()), or None
71 # when the frame held by this page is not Queued as a
72 # candidate for recovery (reuse). None is Python's NULL.
73 # All of our pages are read/write; there are no permission bits.
74 # STUDENT may add other fields.
75 pcb.pagetable = [[-1, 0, None]
76 for p in range(0, pcb.pagecount)];
77 # pcb.freeframeQueue Queue maintains the frames that are free for
78 # allocation without borrowing, it may be empty at times.
79 pcb.freeframeQueue = Queue(ispriority=False);
80 for f in range(0, pcb.framecount): pcb.freeframeQueue.enq(f);
81 # ^^^ Each entry is the frame number of a free frame.
82 pcb.victimQueue = Queue(ispriority=False);
83 # ^^^ victimQueue is the FIFO of in-use pages for FIFO page
84 # replacement

85 # Each entry is a page number, i.e., an index into pagetable.
86 pcb.waitingForFrameQueue = Queue(ispriority=False);
87 # ^^^ These are the (thread, page) pairs waiting for a frame.
88 pcb.pager = thread;
89 # ^^^ Other threads make frameRequests to this thread.
90 threadsToStart = 4;
91 yieldcpu()@
92 pcbinit -> pcbinit yieldcpu()[@threadsToStart > 0@]/@
93 threadsToStart -= 1;
94 spawn();
95 yieldcpu()@
96 pcbinit -> pagingWait yieldcpu()[@threadsToStart == 0@]/@
97 waitForEvent('frameRequest', False)
98 if len(pcb.freeframeQueue) >= minFreeFrames
99 else trigger(0, 'frameRequest')@
100 # If the threads that we just started have already exhausted
101 # the free frame pool, trigger to recover some free frames.
102 pagingWait -> pagingWait frameRequest()[@
103 len(pcb.waitingForFrameQueue) > 0
104 and pcb.pagetable[
105 pcb.waitingForFrameQueue.peek()[1]][0] >= 0@]/@
106 # Waited-for page was paged in by an earlier waiting thread.
107 waitthread, waitpage = pcb.waitingForFrameQueue.deq();
108 signalEvent(waitthread, 'frameReady');
109 waitForEvent('frameRequest', False)@
110 pagingWait -> pagingOut frameRequest()[@
111 pcb.pagetable[pcb.victimQueue.peek()][1] != 0@]/@
112 # The dirty flag is set, so this page must be flushed.
113 oldpage = pcb.victimQueue.deq();
114 freeframe = pcb.pagetable[oldpage][0];
115 # Marked old page table entry as "paged out."
116 pcb.pagetable[oldpage][0] = -1 ;
117 pcb.pagetable[oldpage][1] = 0 ;
118 pcb.pagetable[oldpage][2] = None ;
119 # ^^^ mark pagetable entry as unmapped, then flush frame to disk
120 # VVV io() call writes dirty page to pagingDisk.
121 io(pagingDisk)@
122 pagingWait -> pagingIn frameRequest()[@
123 pcb.pagetable[pcb.victimQueue.peek()][1] == 0@]/@
124 # Dirty flag not set, read in frame for blocked thread.
125 oldpage = pcb.victimQueue.deq();
126 freeframe = pcb.pagetable[oldpage][0];
127 # Marked old page table entry as "paged out."
128 pcb.pagetable[oldpage][0] = -1 ;
129 pcb.pagetable[oldpage][1] = 0 ;
130 pcb.pagetable[oldpage][2] = None ;
131 # ^^^ mark pagetable entry as unmapped.
132 # VVV io() call reads demanded page from pagingDisk.
133 io(pagingDisk)@
134 pagingOut -> pagingIn io()[]/@
135 # Writing is done, read in frame for blocked thread.
136 # VVV io() call reads demanded page from pagingDisk.
137 io(pagingDisk)@
138 pagingIn -> pagingWait io()[@len(pcb.waitingForFrameQueue) > 0@]/@
139 # Requested page is now in, update data & resume wait.
140 # There is a thread waiting for this page.

141 waitthread, waitpage = pcb.waitingForFrameQueue.deq();
142 pcb.pagetable[waitpage][0] = freeframe ;
143 pcb.pagetable[waitpage][1] = 0 ;
144 pcb.pagetable[waitpage][2] = None ;
145 # The receiving thread updates pcb.victimQueue
146 # and pcb.pagetable[waitpage][2] after a successful
147 # pagingIn.
148 signalEvent(waitthread, 'frameReady');
149 waitForEvent('frameRequest', False)
150 if len(pcb.freeframeQueue) >= minFreeFrames
151 else trigger(0, 'frameRequest')@
152 # If the frame we just allocated has exhausted the
153 # pool, trigger to recover some free frames.
154 pagingIn -> pagingWait io()[@len(pcb.waitingForFrameQueue) == 0@]/@
155 # Requested page is now in, update data & resume wait.
156 # There is NO thread waiting for this page.
157 pcb.freeframeQueue.enq(freeframe);
158 waitForEvent('frameRequest', False)
159 if len(pcb.freeframeQueue) >= minFreeFrames
160 else trigger(0, 'frameRequest')@
161 # END OF STATES LIMITED TO THREAD 0, WHICH IS pcb.pager
162 postinit -> scheduling yieldcpu()[@tid != 0@]/@
163 ticks = sample(1, 250, 'exponential', 25) if iobound
164 else sample(100, 1100, 'revexponential', 1000);
165 tickstorun = min(ticks, quantum);
166 tickstodefer = ticks - tickstorun;
167 yieldcpu()@
168 scheduling -> terminated yieldcpu()[@time() >= endtime@]/
169 scheduling -> ready yieldcpu()[@processor.contextsFree == 0@]/@
170 # Put myself in processor's readyq with rr priority.
171 processor.readyq.enq(thread); waitForEvent('context', False)@
172 ready -> scheduling context()[]/@yieldcpu()@
173 # ^^^ Do not set ticks; they have not all been used.
174 scheduling -> running_cpu yieldcpu()[@processor.contextsFree > 0@]/@
175 processor.contextsFree -= 1 ;
176 ticksUntilMemory = sample(1, tickstorun, 'uniform')
177 if (tickstorun > 1) else tickstorun ;
178 tickstorun = tickstorun - ticksUntilMemory;
179 iswrite = sample(0, 1, 'uniform');
180 virtualMemorySize = pcb.pagecount * pagesize ;
181 memoryLocation = sample(0, virtualMemorySize - 1,
182 'gaussian', virtualMemorySize / 2, pagesize * 2)
183 if islocal else
184 sample(0, virtualMemorySize - 1, 'uniform');
185 # Don't let every thread's mem refs pile up together:
186 memoryLocation = (memoryLocation + tid * pagesize * 2)
187 % virtualMemorySize;
188 pageNumber = memoryLocation >> addressBitsInPage ;
189 cpu(ticksUntilMemory)@
190 running_cpu -> running_memory cpu()[]/@
191 ticksForMemoryInstruction = min(tickstorun, 4);
192 # It is possible for an uninterruptible instruction
193 # to over-run the quantum. Use 4 ticks for memory.
194 tickstorun -= (ticksForMemoryInstruction
195 if (tickstorun >= ticksForMemoryInstruction) else 0);
196 cpu(4)@

197 running_memory -> running_cpu cpu()[@
198 pcb.pagetable[pageNumber][0] > -1 and tickstorun > 0@]/@
199 pcb.pagetable[pageNumber][1]
200 = 1 if iswrite else pcb.pagetable[pageNumber][1];
201 # ^^^ Mark dirty bit if this is a write.
202 cpu(0)@
203 running_memory -> waiting_paging cpu()[@
204 pcb.pagetable[pageNumber][0] < 0
205 and len(pcb.freeframeQueue) < 1@]/@
206 # Page is not mapped to frame, request then wait.
207 pcb.waitingForFrameQueue.enq((thread, pageNumber));
208 signalEvent(pcb.pager, 'frameRequest');
209 processor.contextsFree += 1 ;
210 signalEvent(processor.readyq.deq(), 'context')
211 if len(processor.readyq) > 0 else noop();
212 waitForEvent('frameReady', False)@
213 running_memory -> running_memory cpu()[@
214 pcb.pagetable[pageNumber][0] < 0
215 and len(pcb.freeframeQueue) > 0@]/@
216 # Page is not mapped to frame, take a free frame.
217 pcb.pagetable[pageNumber][0] = pcb.freeframeQueue.deq();
218 pcb.pagetable[pageNumber][1] = 1 if iswrite else 0 ;
219 pcb.pagetable[pageNumber][2] = pcb.victimQueue.enq(pageNumber);
220 cpu(0)@
221 waiting_paging -> scheduling frameReady()[]/@
222 # Thread 0 has put the frame into pcb.pagetable[pageNumber][0]
223 pcb.pagetable[pageNumber][1] = 1 if iswrite else 0 ;
224 pcb.pagetable[pageNumber][2] = pcb.victimQueue.enq(pageNumber);
225 # Reschedule tickstorun leftover after paging IO.
226 tickstodefer += tickstorun;
227 tickstorun = min(tickstodefer, quantum);
228 tickstodefer = tickstodefer - tickstorun;
229 yieldcpu()@
230 running_memory -> scheduling cpu()[@
231 pcb.pagetable[pageNumber][0] > -1 and tickstorun < 1
232 and tickstodefer > 0@]/@
233 pcb.pagetable[pageNumber][1]
234 = 1 if iswrite else pcb.pagetable[pageNumber][1];
235 # ^^^ Mark dirty bit if this is a write.
236 processor.contextsFree += 1 ;
237 signalEvent(processor.readyq.deq(), 'context')
238 if len(processor.readyq) > 0 else noop();
239 tickstorun = min(tickstodefer, quantum);
240 tickstodefer = tickstodefer - tickstorun;
241 yieldcpu()@
242 running_memory -> rescheduling cpu()[@
243 pcb.pagetable[pageNumber][0] > -1 and tickstorun < 1
244 and tickstodefer < 1@]/@
245 pcb.pagetable[pageNumber][1]
246 = 1 if iswrite else pcb.pagetable[pageNumber][1];
247 # ^^^ Mark dirty bit if this is a write.
248 processor.contextsFree += 1 ;
249 signalEvent(processor.readyq.deq(), 'context')
250 if len(processor.readyq) > 0 else noop();
251 yieldcpu()@
252 rescheduling -> terminated yieldcpu()[@time() >= endtime@]/

253 rescheduling -> waiting_io yieldcpu()[]/@
254 # Pick an iodevice of -1 (process terminal) or one of
255 # the fastio devices. Save the last fastio for paging.
256 iodevice = sample(-1, len(processor.fastio)-2, 'uniform');
257 io(iodevice)@
258 waiting_io -> scheduling io()[]/@
259 ticks = sample(1, 250, 'exponential', 25) if iobound
260 else sample(100, 1100, 'revexponential', 1000);
261 tickstorun = min(ticks, quantum);
262 tickstodefer = ticks - tickstorun;
263 yieldcpu()@
264 }
265
266 processor

As seen in lines 63 through 72 of the above listing, each entry in the pcb.pagetable has these three fields.

63 # entry[0] gets a non-negative frame number, or -1 if this
64 # page has not yet been allocated a frame or is paged out
65 # entry[1] starts as a 0 when the page is first read into a
66 # frame, is set to 1 whenever a thread writes into the page;
67 # it is the "dirty bit". It determines whether the frame
68 # must be written to disk when the frame is reused.
69 # entry[2] is the current handle into the pcb.victimQueue
70 # for this page entry (as returned by Queue.enq()), or None
71 # when the frame held by this page is not Queued as a
72 # candidate for recovery (reuse). None is Python's NULL.

In summary, each entry in pcb.pagetable contains these fields.

Frame (entry[0]): -1 when not mapped, or >= 0, giving the frame number in physical memory for the frame mapped to this page.

Dirty bit (entry[1]): 0 if no thread has written data into the frame mapped to this page (one or more threads may have read data), or 1 (dirty) if at least one thread has written to the frame. The pager (thread 0) must write dirty pages to disk when re-mapping a frame from one page to another.

Location in pcb.victimQueue (entry[2]): None (Python's equivalent of NULL) when a page is not mapped, otherwise the value returned by pcb.victimQueue.enq() on the most recent call to insert or re-insert a mapped page into this queue.
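To make these fields concrete, here is a small walk-through of one entry's lifecycle in plain Python. It is only an illustration, not part of the supplied code; the frame number 7 and the "handle" placeholder string are invented.

entry = [-1, 0, None]    # unmapped: no frame, clean, not a victim candidate
entry[0] = 7             # page faults in and is mapped to (hypothetical) frame 7
entry[1] = 1             # some thread writes into the page: dirty bit set
entry[2] = "handle"      # stand-in for the opaque handle from pcb.victimQueue.enq()
entry[0], entry[1], entry[2] = -1, 0, None   # pager reclaims the frame: unmapped again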

I did not have to add any fields to pcb.pagetable for LRU or LRU-DIRTY. The pcb.victimQueue is one of the main data structures of interest in converting FIFO page replacement to LRU and LRU-DIRTY. It holds the pages that have been mapped, in an order determined by the page replacement algorithm. FIFO replacement enqueues a page in this queue in FIFO order only at the time that the page is mapped to a frame, hence “pcb.victimQueue = Queue(ispriority=False);” in line 82 above. Setting “ispriority=True” makes a Queue into a min-priority queue as in assignment 2, requiring a second argument, the priority, for the Queue.enq() operation. Note that I have implemented the CPU scheduling readyq as one of these Queue library objects.
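To see why the points at which pages enter and re-enter this queue matter, here is a small, self-contained Python illustration of FIFO versus LRU victim selection. It uses collections.deque as a stand-in for the simulator's Queue class, and the page reference string and frame count are invented:

from collections import deque

refs = [1, 2, 3, 1, 4]   # page 1 is re-used just before page 4 faults
frames = 3               # pretend this process owns three physical frames

fifo = deque()           # FIFO victim order: mapping order only
lru = deque()            # LRU victim order: refreshed on every use
for p in refs:
    for q, name in ((fifo, "FIFO"), (lru, "LRU")):
        if p in q:
            if name == "LRU":        # LRU deletes and re-inserts the page on every use
                q.remove(p)
                q.append(p)
        else:
            if len(q) == frames:     # page fault with no free frame: evict the front page
                print(name, "evicts page", q.popleft())
            q.append(p)

FIFO evicts page 1 (the first page mapped) even though it was just re-used, while LRU evicts page 2 instead and keeps the recently used page 1 mapped.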

TO IMPLEMENT LRU and LRU-DIRTY your main steps will be to change the order in which victim pages dequeue from pcb.victimQueue, and to modify the places in your state machine actions that update the contents of pcb.victimQueue. LRU deletes and then re-inserts a page's "Location in pcb.victimQueue" value in the above table every time that an application thread reads or writes that page's frame. LRU-DIRTY does likewise, basing the order of pages within pcb.victimQueue primarily on the dirty bit (clean pages should sort to the front of the pcb.victimQueue) and secondarily on the memory access order, with the LRU pages sorting towards the front and MRU (most recently used) pages sorting towards the back of pcb.victimQueue. See the comments for operation Queue.enq() below.

In the context of the previous paragraph, "an application thread reads or writes that page's frame" every time you see this action in the state machine:

pcb.pagetable[pageNumber][1]
    = 1 if iswrite else pcb.pagetable[pageNumber][1];

That action updates the "Dirty bit" entry in the pagetable. In this simulation that action represents a read or write to the frame mapped to the page, i.e., a memory "use" in the sense of LRU (least recently used). Note that this action appears in several transitions. Each one of them represents an interaction with the mapped frame by an application thread via a logical memory page.

Note also that if you enqueue a page number into pcb.victimQueue as a min-priority queue, the priority value can be either a scalar number (as it was for Shortest Job First) OR a tuple of scalar values such as (number1, number2) OR (number1, number2, number3), etc. Given such a tuple of scalar values, the min-priority Queue treats number1 as the primary sort key, comparing number2 fields only when the number1 fields are equal for the entries being sorted in the priority Queue.

Note also that Queue.enq() returns a so-called opaque handle that serves as a reference to an item in the Queue. Queue.delete() allows you to remove an item from the middle of a Queue, after which you can re-insert the item's value (e.g., page number) with a new priority at the back of the Queue. FIFO (non-priority) Queues also support deleting an entry and then reinserting it at the back of the Queue. You must determine whether Queue deletion and re-insertion are necessary for LRU and LRU-DIRTY.

Help on class Queue in module CSC343Sim:

class Queue(__builtin__.object)
 |  Queue can be either a FIFO queue or a min-priority queue implemented
 |  using library package heapq. Its FIFO versus min-priority behavior is
 |  set when it is constructed.
 |
 |  Methods defined here:
 |
 |  __init__(ispriority)
 |      Construct a Queue, as a FIFO Queue if parameter ispriority is
 |      False, else construct a priority Queue using a min
 |      heap, i.e., where the enqueued entry with the smallest priority
 |      value has the highest priority.
 |
 |  __len__()
 |      Returns the number of elements in this queue, >= 0.
 |

 |  delete(handle)
 |      Delete a previously enqueued value from the queue, where handle
 |      is a value previously returned by enq. This method throws an
 |      exception if the handle is no longer in this queue.
 |      This method applies to a FIFO or priority queue.
 |      RETURN value is the old, enqueued object value.
 |
 |  deq()
 |      deq() : obj dequeues and returns the obj at the front of the Queue.
 |
 |  enq(obj, priority=None)
 |      enq(obj, priority) enqueues the object pointed to by
 |      obj into this Queue, where priority must be None (default) for a
 |      FIFO queue, and must be any non-None value that can
 |      be compared in sort order to other priority values.
 |      The priority is typically a number, but it can be a tuple
 |      of numbers, for example, in the order:
 |      (mostSignificantKey, ... , leastSignificantKey).
 |      This is a min-queue, so use the "-" minus sign if
 |      necessary when a large value is the higher priority.
 |      RETURN value is an opaque handle that a client can pass
 |      to reprioritize() to change this obj's priority or to
 |      delete() to delete this entry from the queue.
 |
 |  len()
 |      Returns the number of elements in this queue, >= 0.
 |
 |  peek()
 |      peek() : obj, returns the obj at the front of the queue without
 |      removing it from the queue.
 |
 |  reprioritize(handle, priority)
 |      Change the priority of a previously enqueued value to the
 |      priority parameter's value, where handle is a value previously
 |      returned by enq. This method throws an exception if
 |      this is a FIFO queue, or if the handle is no longer in this queue,
 |      or if priority is invalid according to the documentation for enq().
 |      RETURN value is the new handle; the previous one is no
 |      longer valid after this call.

Help on module rr_fifopage:

class processor(state2codeV3.CSC343Sim._Processor_)
 |
 |  fork()
 |      *fork() : (processID, threadID): starts a new process and its initial
 |      thread and terminal, storing the pid in child[] and {pid : PCB}
 |      map, where PCB is a reference to a PCB object.
 |
 |  idle(ticks)
 |      *idle(ticks): delays the _Processor_ *ticks* ticks, useful for fork()ing
 |      processes at temporal intervals in the simulation.
 |
 |  log(file, loglevel, tag=None)
 |

 |  msg(message)
 |      Log text message from user code to log file.
 |
 |  time()
 |      time() : int returns the current, global simulation time in ticks.
 |
 |  trigger(ticks, event, *args)
 |      *trigger(ticks, event, *args) : (args ...) stalls the _Processor_
 |      for *ticks* ticks, and then delivers the named *event* to the
 |      _Processor_ object with the *args* tuple as arguments.
 |
 |  ----------------------------------------------------------------------

class thread(state2codeV3.CSC343Sim._Thread_)
 |
 |  cpu(ticks)
 |      *cpu(ticks) requests ticks on a _Processor_ context.
 |
 |  fork()
 |      *fork() : (pid, threadid) starts the initial thread and terminal of a
 |      new child process, storing the pid in child[] and
 |      {pid : PCB} in processor.pcb, where PCB is a reference to
 |      a PCB object.
 |
 |  getid()
 |      getid() : (machineid, processid, threadid) returns the
 |      (_Processor_.number, processNumber, threadNumber) of this thread.
 |
 |  io(index)
 |      *io(index) requests I/O on fastio[index] or on terminal for index==-1.
 |
 |  log(file, loglevel, tag=None)
 |
 |  msg(message)
 |      Log text message from user code to log file.
 |
 |  retire()
 |      *retire() terminates this thread.
 |      TODO: retire() partly done, does not work with join() or wait().
 |
 |  sleep(ticks)
 |      *sleep(ticks): delays the _Thread_ *ticks* ticks.
 |
 |  spawn()
 |      *spawn() : threadid starts another thread of the current process,
 |      updating this process' PCB.
 |
 |  time()
 |      time() : int returns the current, global simulation time in ticks.
 |
 |  trigger(ticks, event, *args)
 |      *trigger(ticks, event, (args...)) : (args ...) stalls the _Thread_
 |      for *ticks* ticks, and then delivers the named *event* to the
 |      _Thread_ object with the *args* tuple as arguments.
 |
 |  yieldcpu()

 |      *yieldcpu delays the _Thread_ 0 ticks, giving another thread
 |      a chance to run in the simulation.
 |
 |  ----------------------------------------------------------------------
 |  Methods inherited from state2codeV3.CSC343Sim.__scheduledObject__:
 |
 |  sample(lower, upper, distType, *parameters)
 |      Return an integer in the inclusive range [lower, upper] where
 |      lower and upper are integers, and distType and parameters covary
 |      as follows.
 |      distType = 'uniform' gives a uniform distribution in the range
 |      [lower, upper] with parameters ignored.
 |      distType = 'gaussian' gives a gaussian distribution in the range
 |      [lower, upper] with parameters (mu, sigma) where mu is
 |      an int or float within the range [lower, upper] that is the
 |      distribution mean, and sigma is the standard deviation. Results
 |      outside the range are discarded until a valid value
 |      is found and returned.
 |      distType = 'exponential' gives an exponential distribution in the range
 |      [lower, upper] with parameter (mu,) where mu is
 |      an int or float within the range [lower, upper] that is
 |      the mean; the closer it is to lower, the steeper the dropoff.
 |      Results outside the range are discarded until a valid value
 |      is found and returned.
 |      distType = 'revexponential' gives a reverse exponential distribution
 |      in the range [lower, upper] that grows towards upper, with
 |      parameter (mu,) where mu is an int or float within the range
 |      [lower, upper] that is the mean; the closer it is to upper,
 |      the steeper the rise.
 |      Results outside the range are discarded until a valid value
 |      is found and returned.
 |
 |  waitForEvent(eventType, isexclusive)
 |      *waitForEvent(eventType, isexclusive) is the mechanism for waiting for an
 |      eventType delivered by another thread in the simulation, which is delivered
 |      by another thread calling waitingobj.signalEvent(eventType), with the
 |      waitingobj simobj as the object parameter. Parameter isexclusive, if True, means
 |      respond only to that eventType; when False, respond to the arrival of any event.
 |
 |  ----------------------------------------------------------------------
 |  Static methods inherited from state2codeV3.CSC343Sim.__scheduledObject__:
 |
 |  noop()
 |      noop() does nothing and returns None, it is useful as a
 |      default else in a conditional expression when no else value
 |      is needed.
 |
 |  signalEvent(waitingobj, eventType, result=None)
 |      signalEvent(waitingobj, eventType, result) signals the waitingobj waiting within
 |      waitForEvent with the eventType and result value. See waitForEvent.
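Finally, as one hedged illustration of the LRU-DIRTY ordering described above, the short, self-contained Python example below uses heapq directly as a stand-in for a Queue constructed with ispriority=True. The page numbers, dirty bits, and last-use times are invented, and this is only one possible reading of the required ordering, not the official solution.

import heapq

# (dirtyBit, lastUseTime, pageNumber): heapq compares tuples field by field,
# so clean pages (dirtyBit 0) come out before dirty ones, and among pages with
# equal dirty bits the least recently used page (smallest time) comes out first.
victimheap = []
for page, dirty, lastuse in [(3, 1, 40), (7, 0, 90), (5, 0, 10)]:
    heapq.heappush(victimheap, (dirty, lastuse, page))

while victimheap:
    dirty, lastuse, page = heapq.heappop(victimheap)
    print("victim page", page, "dirty", dirty, "last used at tick", lastuse)

This prints page 5 first (clean and least recently used), then page 7 (clean), and dirty page 3 last. In the simulation itself the analogous ordering could come from enq()'ing page numbers into a priority pcb.victimQueue with such a tuple as the priority and refreshing a page's entry (via delete() and enq(), or reprioritize()) on every use; whether deletion and re-insertion are actually necessary is left to you, as stated above.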