18-447
Computer Architecture
Lecture 8: Pipelining
Prof. Onur Mutlu
Carnegie Mellon University
Spring 2013, 2/4/2013
Reminder: Homework 2
Homework 2 out
Due February 11 (next Monday)
LC-3b microcode
ISA concepts, ISA vs. microarchitecture, microcoded machines
2
Reminder: Lab Assignment 2
Lab Assignment 1.5
Verilog practice
Not to be turned in
Lab Assignment 2
Due Feb 15
Single-cycle MIPS implementation in Verilog
All labs are individual assignments
No collaboration; please respect the honor code
3
Lookahead: Extra Credit for Lab Assignment 2
Complete your normal (single-cycle) implementation first, and get it checked off in lab.
Then, implement the MIPS core using a microcoded approach similar to what we will discuss in class.
We are not specifying any particular details of the microcode format or the microarchitecture; you can be creative.
For the extra credit, the microcoded implementation should execute the same programs that your ordinary implementation does, and you should demo it by the normal lab deadline.
You will get a maximum of 4% of the course grade
Document what you have done and demonstrate it well
4
Readings for Today
Pipelining
P&H Chapter 4.5-4.8
Optional: Hamacher et al. book, Chapter 6, “Pipelining”
Pipelined LC-3b Microarchitecture
http://www.ece.cmu.edu/~ece447/s13/lib/exe/fetch.php?media=18447-lc3b-pipelining.pdf
5
Today’s Agenda
Finish microprogrammed microarchitectures
Start pipelining
6
Review: Last Lecture
An exercise in microprogramming
7
Review: An Exercise in
Microprogramming
8
Handouts
7 pages of Microprogrammed LC-3b design
http://www.ece.cmu.edu/~ece447/s13/doku.php?id=manuals
http://www.ece.cmu.edu/~ece447/s13/lib/exe/fetch.php?media=lc3b-figures.pdf
9
A Simple LC-3b Control and Datapath
10
C.2. THE STATE MACHINE
Figure C.2: A state machine for the LC-3b
[Full state-transition diagram. Shared states: 18 (MAR <- PC, PC <- PC + 2), 33 (MDR <- M[MAR], repeated while R is unasserted), 35 (IR <- MDR), 32 (BEN <- IR[11] & N + IR[10] & Z + IR[9] & P, then a 16-way branch on [IR[15:12]]). Opcode-specific sequences follow for ADD, AND, XOR, BR, JMP, JSR, TRAP, SHF, LEA, LDB, LDW, STB, STW, and RTI; the unused opcodes 1010 and 1011 dispatch to states 10 and 11. For example, LDW continues through state 6 (MAR <- B + LSHF(off6,1)), state 25 (MDR <- M[MAR]), and state 27 (DR <- MDR, set CC), then returns to state 18.
Notes: B+off6 : Base + SEXT[offset6]; PC+off9 : PC + SEXT[offset9]; *OP2 may be SR2 or SEXT[imm5]; **store writes [15:8] or [7:0] depending on MAR[0].]
Figure C.5: The microsequencer of the LC-3b base machine
[Block diagram: the 6-bit Address of Next State is formed from J[5:0] and the condition bits COND1/COND0, which select among Branch (BEN), Ready (R), and Addr. Mode (IR[11]); the IRD mux substitutes 0,0,IR[15:12] during decode.]
For the unused opcodes, the microarchitecture would execute a sequence of microinstructions, starting at state 10 or state 11, depending on which illegal opcode was being decoded. In both cases, the sequence of microinstructions would respond to the fact that an instruction with an illegal opcode had been fetched.
Several signals necessary to control the data path and the microsequencer are not
among those listed in Tables C.1 and C.2. They are DR, SR1, BEN, and R. Figure C.6
shows the additional logic needed to generate DR, SR1, and BEN.
The remaining signal, R, is generated by the memory to allow the LC-3b to operate correctly with a memory that takes multiple clock cycles to read or store a value.
State Machine for LDW Microsequencer
LDW state sequence: State 18 (010010) -> State 33 (100001) -> State 35 (100011) -> State 32 (100000) -> State 6 (000110) -> State 25 (011001) -> State 27 (011011)
C.4. THE CONTROL STRUCTURE
Figure C.6: Additional logic required to provide control signals
[(a) DRMUX: DR is taken from IR[11:9] or forced to 111 (R7); (b) SR1MUX: SR1 is taken from IR[11:9] or IR[8:6]; (c) BEN logic: BEN is computed from IR[11:9] and the condition codes P, Z, N.]
The ready signal R allows the LC-3b to operate correctly with a memory that takes multiple clock cycles to read or store a value.
Suppose it takes memory five cycles to read a value. That is, once MAR contains
the address to be read and the microinstruction asserts READ, it will take five cycles
before the contents of the specified location in memory are available to be loaded into
MDR. (Note that the microinstruction asserts READ by means of three control signals:
MIO.EN/YES, R.W/RD, and DATA.SIZE/WORD; see Figure C.3.)
Recall our discussion in Section C.2 of the function of state 33, which accesses
an instruction from memory during the fetch phase of each instruction cycle. For the
LC-3b to operate correctly, state 33 must execute five times before moving on to state
35. That is, until MDR contains valid data from the memory location specified by the
contents of MAR, we want state 33 to continue to re-execute. After five clock cycles,
the memory has completed the “read,” resulting in valid data in MDR, so the processor
can move on to state 35. What if the microarchitecture did not wait for the memory to
complete the read operation before moving on to state 35? Since the contents of MDR
would still be garbage, the microarchitecture would put garbage into IR in state 35.
The ready signal (R) enables the memory read to execute correctly. Since the mem-
ory knows it needs five clock cycles to complete the read, it asserts a ready signal
(R) throughout the fifth clock cycle. Figure C.2 shows that the next state is 33 (i.e.,
100001) if the memory read will not complete in the current clock cycle and state 35
(i.e., 100011) if it will. As we have seen, it is the job of the microsequencer (Figure
C.5) to produce the next state address.
C.4. THE CONTROL STRUCTURE
[The (J, COND, IRD) fields of the current Microinstruction feed the Microsequencer, together with R, BEN, and IR[15:11]; the microsequencer returns the 6-bit address of the next microinstruction in the Control Store (2^6 words x 35 bits).]
Figure C.4: The control structure of a microprogrammed implementation, overall block diagram
on the LC-3b instruction being executed during the current instruction cycle. This state
carries out the DECODE phase of the instruction cycle. If the IRD control signal in the
microinstruction corresponding to state 32 is 1, the output MUX of the microsequencer
(Figure C.5) will take its source from the six bits formed by 00 concatenated with the
four opcode bits IR[15:12]. Since IR[15:12] specifies the opcode of the current LC-
3b instruction being processed, the next address of the control store will be one of 16
addresses, corresponding to the 14 opcodes plus the two unused opcodes, IR[15:12] =
1010 and 1011. That is, each of the 16 next states is the first state to be carried out
after the instruction has been decoded in state 32. For example, if the instruction being
processed is ADD, the address of the next state is state 1, whose microinstruction is
stored at location 000001. Recall that IR[15:12] for ADD is 0001.
If, somehow, the instruction inadvertently contained IR[15:12] = 1010 or 1011, the unused opcodes, the microarchitecture would execute a sequence of microinstructions, starting at state 10 or state 11, depending on which illegal opcode was being decoded. In both cases, the sequence of microinstructions would respond to the fact that an instruction with an illegal opcode had been fetched.
Figure C.7: Specification of the control store
[Table: one row per state, addresses 000000 (State 0) through 111111 (State 63). Columns: the microinstruction fields — J, COND, IRD; LD.MAR, LD.MDR, LD.IR, LD.BEN, LD.REG, LD.CC, LD.PC; GatePC, GateMDR, GateALU, GateMARMUX, GateSHF; PCMUX, DRMUX, SR1MUX, ADDR1MUX, ADDR2MUX, MARMUX; ALUK, MIO.EN, R.W, DATA.SIZE, LSHF1.]
Review: End of the Exercise in
Microprogramming
20
The Microsequencer: Some Questions
When is the IRD signal asserted?
What happens if an illegal instruction is decoded?
What are condition (COND) bits for?
How is variable latency memory handled?
How do you do the state encoding?
Minimize number of state variables
Start with the 16-way branch
Then determine constraint tables and states dependent on COND
21
The Control Store: Some Questions
What control signals can be stored in the control store?
vs.
What control signals have to be generated in hardwired logic?
i.e., which signals cannot be determined without information computed in the datapath?
22
Variable-Latency Memory
The ready signal (R) enables memory read/write to execute correctly
Example: transition from state 33 to state 35 is controlled by the R bit asserted by memory when memory data is available
Could we have done this in a single-cycle microarchitecture?
23
The Microsequencer: Advanced Questions
What happens if the machine is interrupted?
What if an instruction generates an exception?
How can you implement a complex instruction using this control structure?
Think REP MOVS
24
The Power of Abstraction
The concept of a control store of microinstructions provides the hardware designer with a new abstraction: microprogramming
The designer can translate any desired operation into a sequence of microinstructions
All the designer needs to provide is
The sequence of microinstructions needed to implement the desired operation
The ability for the control logic to correctly sequence through the microinstructions
Any additional datapath control signals needed (no need if the operation can be “translated” into existing control signals)
25
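As a toy illustration of this abstraction, the control store can be modeled as a table mapping each operation to its microinstruction sequence; the sequencing logic then only needs to step through the table in order. The operation names and register-transfer strings below are illustrative, loosely following the LC-3b state machine rather than transcribing it exactly.

```python
# Toy control store: each ISA-level operation is "translated" into a
# sequence of microinstructions (represented here as register-transfer strings).
CONTROL_STORE = {
    # fetch states shared by every instruction
    "FETCH": ["MAR <- PC; PC <- PC + 2", "MDR <- M[MAR]", "IR <- MDR"],
    "ADD": ["DR <- SR1 + OP2; set CC"],
    "LDW": ["MAR <- B + LSHF(off6, 1)", "MDR <- M[MAR]",
            "DR <- MDR; set CC"],
}

def microprogram(op):
    """Sequence of microinstructions implementing one instruction cycle."""
    return CONTROL_STORE["FETCH"] + CONTROL_STORE[op]

assert len(microprogram("ADD")) == 4
assert microprogram("LDW")[-1] == "DR <- MDR; set CC"
```

Supporting a new instruction then amounts to adding one more entry to the table — no datapath change needed if the operation can be expressed with existing control signals.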
Let’s Do Some More Microprogramming
Implement REP MOVS in the LC-3b microarchitecture
What changes, if any, do you make to the
state machine?
datapath?
control store?
microsequencer?
Show all changes and microinstructions
Coming up in Homework 3
26
Aside: Alignment Correction in Memory
Remember unaligned accesses
LC-3b has byte load and byte store instructions that move data not aligned at the word-address boundary
Convenience to the programmer/compiler
How does the hardware ensure this works correctly?
Take a look at state 29 for LDB
States 24 and 17 for STB
Additional logic to handle unaligned accesses
27
Aside: Memory Mapped I/O
Address control logic determines whether the address specified by LDx and STx instructions refers to memory or to I/O devices
Correspondingly enables memory or I/O devices and sets up muxes
Another instance where the final control signals (e.g., MEM.EN or INMUX/2) cannot be stored in the control store
Dependent on address
28
Advantages of Microprogrammed Control
Allows a very simple datapath to do powerful computation by controlling the datapath (using a sequencer)
High-level ISA translated into microcode (sequence of microinstructions)
Microcode enables a minimal datapath to emulate an ISA
Microinstructions can be thought of as a user-invisible ISA
Enables easy extensibility of the ISA
Can support a new instruction by changing the µcode
Can support complex instructions as a sequence of simple microinstructions
If I can sequence an arbitrary instruction, then I can sequence an arbitrary “program” as a microprogram sequence
Will need some new state (e.g., loop counters) in the microcode for sequencing more elaborate programs
29
Update of Machine Behavior
The ability to update/patch microcode in the field (after a processor is shipped) enables
Ability to add new instructions without changing the processor!
Ability to “fix” buggy hardware implementations
Examples
IBM 370 Model 145: microcode stored in main memory, can be updated after a reboot
IBM System z: Similar to 370/145.
Heller and Farrell, “Millicode in an IBM zSeries processor,” IBM JR&D, May/Jul 2004.
B1700 microcode can be updated while the processor is running
User-microprogrammable machine!
30
An Example Microcoded Multi-Cycle MIPS Design
P&H, Appendix D
Any ISA can be implemented this way
We will not cover this in class
However, you can do the extra credit assignment for Lab 2
Partial credit even if your full design does not work
Maximum 4% of your grade in the course
31
A Microprogrammed MIPS
Design: Lab 2 Extra Credit
32
Microcoded Multi-Cycle MIPS Design
33 [Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
Control Logic for MIPS FSM
34 [Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
Microprogrammed Control for MIPS FSM
35 [Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
End: A Microprogrammed MIPS
Design: Lab 2 Extra Credit
36
Horizontal Microcode
37
[Figure: a microprogram counter (µPC), incremented by an adder; address select logic chooses the next µPC using inputs from the instruction register opcode field; microcode storage produces outputs — datapath control outputs and sequencing control.]
[Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
Each microinstruction directly encodes every datapath control signal: ALUSrcA, IorD, IRWrite, PCWrite, PCWriteCond, …
n-bit µPC input, k-bit “control” output
Control Store: 2^n x k bits (not including sequencing)
Vertical Microcode
38
[Figure: the same µPC / address-select-logic / microcode-storage organization as in the horizontal-microcode figure, with inputs from the instruction register opcode field and outputs for datapath control and sequencing control.]
[Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
Each microinstruction is a compact code for one register transfer (or a combination of RTs), decoded by a second ROM into the raw control signals (ALUSrcA, IorD, IRWrite, PCWrite, PCWriteCond, …):
“PC <- PC+4”
“PC <- ALUOut”
“PC <- {PC[31:28], IR[25:0], 2’b00}”
“IR <- MEM[PC]”
“A <- RF[IR[25:21]]”
“B <- RF[IR[20:16]]”
…
n-bit µPC input -> m-bit code -> k-bit control output
1-bit signal means do this RT (or combination of RTs)
If done right (i.e., m << n and m << k), the two ROMs together (2^n x m + 2^m x k bits) should be smaller than the horizontal microcode ROM (2^n x k bits)
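The size comparison on this slide is easy to check numerically. The sketch below plugs in assumed values for n, m, and k — they are illustrative, not taken from any particular machine.

```python
# Control-store size: horizontal vs. two-level (vertical) microcode.
def horizontal_bits(n, k):
    return (2 ** n) * k                    # 2^n words, k control bits each

def vertical_bits(n, m, k):
    # 2^n words of m-bit codes, plus a 2^m-entry decode ROM of k-bit words
    return (2 ** n) * m + (2 ** m) * k

n, m, k = 10, 6, 64                        # assumed sizes with m << n, m << k
assert horizontal_bits(n, k) == 65536      # 1024 x 64
assert vertical_bits(n, m, k) == 10240     # 1024 x 6 + 64 x 64, far smaller
```

The saving comes from storing each distinct register transfer's k control bits only once, at the cost of an extra decode step on every microinstruction.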
Nanocode and Millicode
Nanocode: a level below traditional µcode
µprogrammed control for sub-systems (e.g., a complicated floating-point module) that acts as a slave in a µcontrolled datapath
Millicode: a level above traditional µcode
ISA-level subroutines that can be called by the µcontroller to handle complicated operations and system functions
E.g., Heller and Farrell, “Millicode in an IBM zSeries processor,” IBM JR&D, May/Jul 2004.
In both cases, we avoid complicating the main µcontroller
You can think of these as “microcode” at different levels of abstraction
39
Nanocode Concept Illustrated
40
[Figure, left: a “µcoded” processor implementation — ROM + µPC driving the processor datapath. Right: a “µcoded” FPU implementation — its own ROM + µPC driving an arithmetic datapath.]
We refer to this as “nanocode” when a µcoded subsystem is embedded in a µcoded system
Multi-Cycle vs. Single-Cycle uArch
Advantages
Disadvantages
You should be very familiar with this right now
41
Microprogrammed vs. Hardwired Control
Advantages
Disadvantages
You should be very familiar with this right now
42
Can We Do Better?
What limitations do you see with the multi-cycle design?
Limited concurrency
Some hardware resources are idle during different phases of instruction processing cycle
“Fetch” logic is idle when an instruction is being “decoded” or “executed”
Most of the datapath is idle when a memory access is happening
43
Can We Use the Idle Hardware to Improve Concurrency?
Goal: Concurrency -> higher throughput (more “work” completed in one cycle)
Idea: When an instruction is using some resources in its processing phase, process other instructions on idle resources not needed by that instruction
E.g., when an instruction is being decoded, fetch the next instruction
E.g., when an instruction is being executed, decode another instruction
E.g., when an instruction is accessing data memory (ld/st), execute the next instruction
E.g., when an instruction is writing its result into the register file, access data memory for the next instruction
44
Pipelining: Basic Idea
More systematically:
Pipeline the execution of multiple instructions
Analogy: “Assembly line processing” of instructions
Idea:
Divide the instruction processing cycle into distinct “stages” of processing
Ensure there are enough hardware resources to process one instruction in each stage
Process a different instruction in each stage
Instructions consecutive in program order are processed in consecutive stages
Benefit: Increases instruction processing throughput (1/CPI)
Downside: Start thinking about this…
45
Example: Execution of Four Independent ADDs
Multi-cycle: 4 cycles per instruction
Pipelined: 4 cycles per 4 instructions (steady state)
46
[Timing diagrams, time running left to right:]
Multi-cycle (one instruction at a time):
I1: F D E W
I2:         F D E W
I3:                 F D E W
I4:                         F D E W
Pipelined (a different instruction in each stage every cycle):
I1: F D E W
I2:   F D E W
I3:     F D E W
I4:       F D E W
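The two timelines above can be generated mechanically; here is a small sketch, using the stage names from the slide.

```python
# Build the cycle-by-cycle stage occupancy of an ideal 4-stage pipeline.
STAGES = ["F", "D", "E", "W"]

def pipelined_timeline(n):
    """rows[i][c] = stage occupied by instruction i in cycle c ('.' if none)."""
    cycles = n + len(STAGES) - 1
    rows = [["."] * cycles for _ in range(n)]
    for i in range(n):
        for s, name in enumerate(STAGES):
            rows[i][i + s] = name    # instruction i enters stage s at cycle i+s
    return rows

rows = pipelined_timeline(4)
assert rows[0] == ["F", "D", "E", "W", ".", ".", "."]
assert rows[3] == [".", ".", ".", "F", "D", "E", "W"]
# In steady state, 4 instructions finish in 7 cycles instead of 16.
assert len(rows[0]) == 7
```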
The Laundry Analogy
“place one dirty load of clothes in the washer”
“when the washer is finished, place the wet load in the dryer”
“when the dryer is finished, take out the dry load and fold”
“when folding is finished, ask your roommate (??) to put the clothes away”
47
- steps to do a load are sequentially dependent
- no dependence between different loads
- different steps do not share resources
[Figure: laundry timeline from 6 PM to 2 AM. Top: sequential — loads A–D each run washer, dryer, fold, and put-away back-to-back. Bottom: the same loads, task order A, B, C, D.]
Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
Pipelining Multiple Loads of Laundry
48
[Figure: pipelined laundry timeline, 6 PM to 2 AM — loads A–D overlap across the washer, dryer, fold, and put-away steps.]
- latency per load is the same
- throughput increased by 4
- 4 loads of laundry in parallel
- no additional resources
Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
Pipelining Multiple Loads of Laundry: In Practice
49
[Figure: pipelined laundry timeline where one step (the dryer) takes longer than the others, so loads queue up behind it.]
the slowest step decides throughput
Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
Pipelining Multiple Loads of Laundry: In Practice
50
[Figure: the same timeline with a second dryer added; consecutive loads A and B alternate between the two dryers.]
Throughput restored (2 loads per hour) using 2 dryers
Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
An Ideal Pipeline
Goal: Increase throughput with little increase in cost (hardware cost, in case of instruction processing)
Repetition of identical operations
The same operation is repeated on a large number of different inputs
Repetition of independent operations
No dependencies between repeated operations
Uniformly partitionable suboperations
Processing can be evenly divided into uniform-latency suboperations (that do not share resources)
Fitting examples: automobile assembly line, doing laundry
What about the instruction processing “cycle”?
51
Ideal Pipelining
52
combinational logic (F,D,E,M,W), delay T ps -> BW ~ 1/T
two stages, T/2 ps each: (F,D,E) | (M,W) -> BW ~ 2/T
three stages, T/3 ps each: (F,D) | (E,M) | (M,W) -> BW ~ 3/T
More Realistic Pipeline: Throughput
Nonpipelined version with delay T:
BW = 1/(T+S), where S = latch delay
k-stage pipelined version:
BW_k-stage = 1/(T/k + S)
BW_max = 1/(1 gate delay + S)
53
[Figure: T ps of logic nonpipelined vs. k stages of T/k ps each, separated by latches]
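Plugging numbers into these formulas shows the effect of the latch overhead S; the values of T and S below are arbitrary illustrations.

```python
# Throughput of a k-stage pipeline with logic delay T and latch delay S.
def bw(T, S, k=1):
    return 1.0 / (T / k + S)

T, S = 800.0, 20.0                       # illustrative delays, in ps
assert bw(T, S, 1) < bw(T, S, 5) < bw(T, S, 10)
# Latch overhead causes diminishing returns: doubling the stage count
# from 5 to 10 improves bandwidth by less than 2x (1/180 -> 1/100 per ps).
assert bw(T, S, 10) / bw(T, S, 5) < 2.0
```

As k grows, T/k shrinks toward zero and the latch delay S dominates, which is exactly the BW_max = 1/(1 gate delay + S) limit on the slide.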
More Realistic Pipeline: Cost
Nonpipelined version with combinational cost G:
Cost = G + L, where L = latch cost
k-stage pipelined version:
Cost_k-stage = G + L·k
54
[Figure: G gates nonpipelined vs. k stages of G/k gates each, with a latch per stage]
Pipelining Instruction Processing
55
Remember: The Instruction Processing Cycle
Fetch
Decode
Evaluate Address
Fetch Operands
Execute
Store Result
56
1. Instruction fetch (IF)
2. Instruction decode and register operand fetch (ID/RF)
3. Execute/Evaluate memory address (EX/AG)
4. Memory operand fetch (MEM)
5. Store/writeback result (WB)
Remember the Single-Cycle Uarch
57
[Figure: the single-cycle MIPS datapath — PC, instruction memory, register file, sign extend, ALU and ALU control, data memory, and the jump/branch address logic (Shift left 2; PC+4[31:28] concatenated with the jump address from Instruction[25:0]); the Control unit decodes Instruction[31:26] into RegDst, Jump, Branch, MemRead, MemtoReg, ALUOp, MemWrite, ALUSrc, and RegWrite. Annotations: PCSrc1=Jump, PCSrc2=Br Taken, ALU operation, bcond.]
Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
Single cycle of delay T -> BW ~ 1/T
Dividing Into Stages
58
[Figure: the single-cycle datapath divided into five stages — IF: Instruction fetch; ID: Instruction decode/register file read; EX: Execute/address calculation; MEM: Memory access; WB: Write back. Stage latencies: 200 ps (IF), 100 ps (ID), 200 ps (EX), 200 ps (MEM), 100 ps (WB). The RF write in WB: ignore for now.]
Is this the correct partitioning? Why not 4 or 6 stages? Why not different boundaries?
Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
Instruction Pipeline Throughput
59
[Figure: program execution order of three loads — lw $1, 100($0); lw $2, 200($0); lw $3, 300($0) — each passing through Instruction fetch, Reg, ALU, Data access, Reg.
Nonpipelined: each instruction takes 800 ps, so a new instruction starts every 800 ps.
Pipelined: with a 200 ps clock, a new instruction starts every 200 ps.]
5-stage speedup is 4, not 5 as predicted by the ideal model. Why?
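The answer follows from the stage latencies on the Dividing Into Stages slide (200, 100, 200, 200, 100 ps): the pipelined clock must accommodate the slowest stage, not the average. A quick check:

```python
# Stage latencies (ps) from the slide: IF, ID, EX, MEM, WB.
stage_ps = [200, 100, 200, 200, 100]

nonpipelined_ps = sum(stage_ps)                    # 800 ps per instruction
cycle_ps = max(stage_ps)                           # clock fits the slowest stage
ideal_cycle_ps = nonpipelined_ps / len(stage_ps)   # 160 ps if stages were uniform

speedup = nonpipelined_ps / cycle_ps
assert cycle_ps == 200 and ideal_cycle_ps == 160.0
assert speedup == 4.0                              # not 5: stages are not uniform
```

The ideal speedup of 5 would require all five stages to take 160 ps; because the clock is set by the 200 ps stages, 40 ps per cycle is wasted in the 100 ps stages.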
Enabling Pipelined Processing: Pipeline Registers
60
[Figure: the five-stage datapath with pipeline registers IF/ID, ID/EX, EX/MEM, and MEM/WB inserted between the IF, ID, EX, MEM, and WB stages; each stage now takes T/k ps instead of T ps for the whole datapath.
No resource is used by more than 1 stage!
Pipeline register contents include: IR_D, PC_F, PC_D+4, PC_E+4, nPC_M, A_E, B_E, Imm_E, Aout_M, B_M, MDR_W, Aout_W.]
Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
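A minimal sketch of what the pipeline registers buy us: on each clock edge, every register latches its stage's result, so the in-flight instructions never share a resource. Stage work is abstracted away here; values just flow through the registers.

```python
# Four pipeline registers => an instruction retires 5 cycles after fetch.
def step(regs, fetched):
    """One clock: shift through the IF/ID, ID/EX, EX/MEM, MEM/WB registers."""
    if_id, id_ex, ex_mem, mem_wb = regs
    return (fetched, if_id, id_ex, ex_mem), mem_wb   # new regs, retired value

regs, retired = (None, None, None, None), []
for pc in range(4):                      # fetch instructions 0..3
    regs, done = step(regs, pc)
    retired.append(done)
for _ in range(4):                       # then drain the pipeline
    regs, done = step(regs, None)
    retired.append(done)

# All four instructions complete, in program order.
assert [r for r in retired if r is not None] == [0, 1, 2, 3]
```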
Pipelined Operation Example
61
[Figure: five snapshots of the pipelined datapath (with IF/ID, ID/EX, EX/MEM, MEM/WB registers) as a lw flows through it — Instruction fetch, Instruction decode, Execution, Memory, Write back — with the active stage highlighted in each snapshot.]
Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
All instruction classes must follow the same path and timing through the pipeline stages. Any performance impact?
Pipelined Operation Example
62
In str uct ion
m e m o ry
Add re ss
4
32
0
A ddA dd
res u lt
S h if t
le ft 2
Ins
tru
cti
on
IF /ID E X /M E M M E M /W B
M
u
x
0
1
Ad d
PC
0W ri te
data
M
u
x
1
R eg iste rs
R e ad
d ata 1
R e ad
d ata 2
R ea d
re g is ter 1
R ea d
re g is ter 2
16S ign
e xte nd
W rite
re g is ter
W rite
d ata
R ead
da ta
1
A LU
res u ltM
u
x
A LU
Ze ro
ID /E X
In s tru c tio n de co d e
lw $ 1 0 , 20 ($1 )
In s tru c tio n fe tc h
s u b $ 1 1 , $ 2 , $ 3
In str uct ion
m e m o ry
Add re ss
4
32
0
A ddA dd
res u lt
S h if t
le ft 2
Ins
tru
cti
on
IF /ID E X /M E M M E M /W B
M
u
x
0
1
Ad d
PC
0W ri te
data
M
u
x
1
R eg iste rs
R e ad
d ata 1
R e ad
d ata 2
R ea d
re g is ter 1
R ea d
re g is ter 2
16S ign
e xte nd
W rite
re g is ter
W rite
d ata
R ead
da ta
1
A LU
res u ltM
u
x
A LU
Ze ro
ID /E X
In s tru c tio n fe tc h
lw $ 1 0 , 2 0 ($ 1 )
A dd res s
D a ta
m e m o ry
A dd res s
D a ta
m e m o ry
C lo ck 1
C lo ck 2
[Figure: two snapshots of the pipelined MIPS datapath. Clock 3: lw $10, 20($1) is in execution while sub $11, $2, $3 is in instruction decode. Clock 4: lw $10, 20($1) is in memory while sub $11, $2, $3 is in execution.]
[Figure: two snapshots of the pipelined MIPS datapath. Clock 5: lw $10, 20($1) is in write back while sub $11, $2, $3 is in memory. Clock 6: sub $11, $2, $3 is in write back.]
Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
Illustrating Pipeline Operation: Operation View
63
        t0   t1   t2   t3   t4   t5
Inst0   IF   ID   EX   MEM  WB
Inst1        IF   ID   EX   MEM  WB
Inst2             IF   ID   EX   MEM
Inst3                  IF   ID   EX
Inst4                       IF   ID
Illustrating Pipeline Operation: Resource View
64
      t0   t1   t2   t3   t4   t5   t6   t7   t8   t9   t10
IF    I0   I1   I2   I3   I4   I5   I6   I7   I8   I9   I10
ID         I0   I1   I2   I3   I4   I5   I6   I7   I8   I9
EX              I0   I1   I2   I3   I4   I5   I6   I7   I8
MEM                  I0   I1   I2   I3   I4   I5   I6   I7
WB                        I0   I1   I2   I3   I4   I5   I6
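Both views above can be generated mechanically: in an ideal 5-stage pipeline, instruction i occupies stage s during cycle i + s. A short Python sketch of the resource view (illustrative, not part of the lecture materials):

```python
# Illustrative sketch: which instruction occupies each stage at each cycle
# in an ideal 5-stage pipeline (instruction i is in stage s at cycle i + s).
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def resource_view(num_insts, num_cycles):
    """Return {stage: [instruction index or None, one entry per cycle]}."""
    view = {stage: [None] * num_cycles for stage in STAGES}
    for i in range(num_insts):
        for s, stage in enumerate(STAGES):
            t = i + s  # cycle in which instruction i occupies this stage
            if t < num_cycles:
                view[stage][t] = i
    return view

view = resource_view(num_insts=11, num_cycles=11)
# At cycle 4 the pipeline is full: IF holds I4, ID holds I3, ..., WB holds I0.
print([view[stage][4] for stage in STAGES])  # [4, 3, 2, 1, 0]
```

Note the steady state: from cycle 4 onward every stage is busy every cycle, which is exactly where the throughput gain of pipelining comes from.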
Control Points in a Pipeline
65
[Figure: pipelined datapath annotated with its control points (RegDst, ALUSrc, ALUOp, Branch, MemRead, MemWrite, MemtoReg, RegWrite, PCSrc) and the IF/ID, ID/EX, EX/MEM, and MEM/WB pipeline registers.]
Identical set of control points as the single-cycle datapath!!
Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
Control Signals in a Pipeline
For a given instruction
same control signals as single-cycle, but
control signals required at different cycles, depending on stage
decode once using the same logic as single-cycle and buffer control signals until consumed
or carry relevant “instruction word/field” down the pipeline and decode locally within each stage (still same logic)
Which one is better?
66
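One way to see the tradeoff: option 1 decodes once in ID and shifts the control-signal groups down the pipeline registers until each group is consumed. A hypothetical sketch of option 1 (the signal names follow the MIPS-style figure, but the decode table is invented for illustration):

```python
# Sketch of option 1: decode once in ID, then carry the EX, MEM, and WB
# control-signal groups through the pipeline registers until consumed.
def decode(opcode):
    """Hypothetical single-cycle-style decoder: opcode -> control groups."""
    table = {
        "lw":  {"EX":  {"ALUSrc": 1, "ALUOp": "add"},
                "MEM": {"MemRead": 1, "MemWrite": 0},
                "WB":  {"RegWrite": 1, "MemtoReg": 1}},
        "sub": {"EX":  {"ALUSrc": 0, "ALUOp": "sub"},
                "MEM": {"MemRead": 0, "MemWrite": 0},
                "WB":  {"RegWrite": 1, "MemtoReg": 0}},
    }
    return table[opcode]

# Each pipeline register carries only the groups still needed downstream.
id_ex = decode("lw")                           # all three groups leave ID
ex_mem = {k: id_ex[k] for k in ("MEM", "WB")}  # EX group consumed in EX
mem_wb = {"WB": ex_mem["WB"]}                  # MEM group consumed in MEM
print(mem_wb["WB"]["RegWrite"])  # 1
```

Option 2 would instead latch the relevant instruction bits in each pipeline register and re-run (the same) decode logic locally per stage, trading wider pipeline registers or replicated decoders against each other.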
[Figure: control signals decoded from the instruction in ID and buffered through the pipeline registers; the EX, M, and WB signal groups travel through ID/EX, EX/MEM, and MEM/WB until each group is consumed in its stage.]
Pipelined Control Signals
67
[Figure: full pipelined datapath with control. The signals (RegDst, ALUSrc, ALUOp, Branch, MemRead, MemWrite, MemtoReg, RegWrite, PCSrc) are decoded in ID, carried as EX, M, and WB groups in the ID/EX, EX/MEM, and MEM/WB registers, and applied in the stage where each is needed.]
Based on original figure from [P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
An Ideal Pipeline
Goal: Increase throughput with little increase in cost (hardware cost, in case of instruction processing)
Repetition of identical operations
The same operation is repeated on a large number of different inputs
Repetition of independent operations
No dependencies between repeated operations
Uniformly partitionable suboperations
Processing can be evenly divided into uniform-latency suboperations (that do not share resources)
Fitting examples: automobile assembly line, doing laundry
What about the instruction processing “cycle”?
68
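The payoff from an ideal pipeline can be quantified: if a single-cycle design takes time T per instruction and is split into k uniform stages, each separated by a pipeline register with overhead t_latch, the cycle time becomes T/k + t_latch and the steady-state speedup is T / (T/k + t_latch). A small sketch (the numbers are illustrative, not from the lecture):

```python
# Illustrative: steady-state speedup of an ideal k-stage pipeline over a
# single-cycle design, including per-stage latch (pipeline register) overhead.
def pipeline_speedup(total_work_ns, k, latch_ns=0.0):
    cycle_time = total_work_ns / k + latch_ns
    return total_work_ns / cycle_time

print(pipeline_speedup(10.0, 5))                          # 5.0 (no overhead)
print(round(pipeline_speedup(10.0, 5, latch_ns=0.5), 2))  # 4.0
```

The latch overhead is one reason speedup falls short of k even before dependences and fragmentation enter the picture.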
Instruction Pipeline: Not An Ideal Pipeline
Identical operations ... NOT!
different instructions do not need all stages
- Forcing different instructions to go through the same multi-function pipe
external fragmentation (some pipe stages idle for some instructions)
Uniform suboperations ... NOT!
difficult to balance the different pipeline stages
- Not all pipeline stages do the same amount of work
internal fragmentation (some pipe stages are too fast but take the same clock cycle time)
Independent operations ... NOT!
instructions are not independent of each other
- Need to detect and resolve inter-instruction dependencies to ensure the pipeline operates correctly
Pipeline is not always moving (it stalls)
69
Issues in Pipeline Design
Balancing work in pipeline stages
How many stages and what is done in each stage
Keeping the pipeline correct, moving, and full in the presence of events that disrupt pipeline flow
Handling dependences
Data
Control
Handling resource contention
Handling long-latency (multi-cycle) operations
Handling exceptions, interrupts
Advanced: Improving pipeline throughput
Minimizing stalls
70
Causes of Pipeline Stalls
Resource contention
Dependences (between instructions)
Data
Control
Long-latency (multi-cycle) operations
71
Dependences and Their Types
Also called “dependency” or less desirably “hazard”
Dependencies dictate ordering requirements between instructions
Two types
Data dependence
Control dependence
Resource contention is sometimes called resource dependence
However, this is not fundamental to (dictated by) program semantics, so we will treat it separately
72
Handling Resource Contention
Happens when instructions in two pipeline stages need the same resource
Solution 1: Eliminate the cause of contention
Duplicate the resource or increase its throughput
E.g., use separate instruction and data memories (caches)
E.g., use multiple ports for memory structures
Solution 2: Detect the resource contention and stall one of the contending stages
Which stage do you stall?
Example: What if you had a single read and write port for the register file?
73
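Solution 2 can be sketched as a simple arbitration check each cycle: if two stages request the same resource, grant it to the older instruction and stall the younger one. This is a toy model (function and signal names are invented for illustration), not the lab's Verilog:

```python
# Toy model of solution 2: with a single unified memory port, IF (fetching)
# and MEM (a load/store) contend; stall the younger instruction (the fetch).
def arbitrate_memory(if_wants_fetch, mem_is_load_store):
    """Return who gets the memory port this cycle and which stage stalls."""
    if if_wants_fetch and mem_is_load_store:
        return {"grant": "MEM", "stall": "IF"}  # older instruction wins
    if mem_is_load_store:
        return {"grant": "MEM", "stall": None}
    if if_wants_fetch:
        return {"grant": "IF", "stall": None}
    return {"grant": None, "stall": None}

print(arbitrate_memory(True, True))  # {'grant': 'MEM', 'stall': 'IF'}
```

Stalling the older (deeper) stage instead would be worse: everything behind it is blocked anyway, and the older instruction may never make progress.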
Data Dependences
Types of data dependences
Flow dependence (true data dependence – read after write)
Output dependence (write after write)
Anti dependence (write after read)
Which ones cause stalls in a pipelined machine?
For all of them, we need to ensure semantics of the program are correct
Flow dependences always need to be obeyed because they constitute true dependence on a value
Anti and output dependences exist due to limited number of architectural registers
They are dependence on a name, not a value
We will later see what we can do about them
74
Data Dependence Types
75
Flow dependence (Read-after-Write, RAW):
  r3 <- r1 op r2
  r5 <- r3 op r4

Anti dependence (Write-after-Read, WAR):
  r3 <- r1 op r2
  r1 <- r4 op r5

Output dependence (Write-after-Write, WAW):
  r3 <- r1 op r2
  r5 <- r3 op r4
  r3 <- r6 op r7
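The three cases above can be detected mechanically by comparing the destination and source registers of two instructions in program order. A minimal sketch (register names are strings like "r3"; the representation is an assumption for illustration):

```python
# Classify the data dependences of a later instruction on an earlier one,
# given each instruction's destination register and source register set.
def dependences(earlier, later):
    """earlier/later: dicts with 'dst' (str) and 'srcs' (set of str)."""
    found = set()
    if earlier["dst"] in later["srcs"]:
        found.add("flow (RAW)")    # later reads what earlier wrote
    if later["dst"] in earlier["srcs"]:
        found.add("anti (WAR)")    # later overwrites what earlier reads
    if later["dst"] == earlier["dst"]:
        found.add("output (WAW)")  # both write the same register
    return found

i1 = {"dst": "r3", "srcs": {"r1", "r2"}}  # r3 <- r1 op r2
i2 = {"dst": "r5", "srcs": {"r3", "r4"}}  # r5 <- r3 op r4
print(dependences(i1, i2))  # {'flow (RAW)'}
```

Note that only the flow case constrains when values must be produced; the anti and output cases depend only on register names, which is why renaming can remove them.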
How to Handle Data Dependences
Anti and output dependences are easier to handle
write to the destination in one stage and in program order
Flow dependences are more interesting
Five fundamental ways of handling flow dependences
76