Pipelining
Can We Do Better than Microprogrammed Designs?
What limitations do you see with the multi-cycle design?
Limited concurrency: some hardware resources are idle during different phases of the instruction processing cycle
- "Fetch" logic is idle when an instruction is being "decoded" or "executed"
- Most of the datapath is idle when a memory access is happening
2
Can We Use the Idle Hardware to Improve Concurrency?
Goal: Concurrency → throughput (more "work" completed in one cycle)
Idea: When an instruction is using some resources in its processing phase, process other instructions on idle resources not needed by that instruction
- E.g., when an instruction is being decoded, fetch the next instruction
- E.g., when an instruction is being executed, decode another instruction
- E.g., when an instruction is accessing data memory (ld/st), execute the next instruction
- E.g., when an instruction is writing its result into the register file, access data memory for the next instruction
Pipelining: Basic Idea
More systematically: pipeline the execution of multiple instructions
- Analogy: "assembly line" processing of instructions
Idea: Divide the instruction processing cycle into distinct "stages" of processing
- Ensure there are enough hardware resources to process one instruction in each stage
- Process a different instruction in each stage
- Instructions consecutive in program order are processed in consecutive stages
Benefit: Increases instruction processing throughput (1/CPI)
Downside: Start thinking about this…
4
Example: Execution of Four Independent ADDs
Multi-cycle: 4 cycles per instruction
Pipelined: 4 cycles per 4 instructions (steady state)

Multi-cycle (over time):  F D E W | F D E W | F D E W | F D E W
Pipelined (over time):    F D E W
                            F D E W
                              F D E W
                                F D E W
Is life always this beautiful?
5
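The cycle counts on this slide can be sketched in a few lines. This is an illustrative model only (it assumes one instruction completes per cycle once the pipeline is full, and ignores stalls, which come later in the lecture):

```python
def multicycle_cycles(n_insts, n_stages):
    # Multi-cycle design: each instruction occupies the whole datapath
    # for n_stages cycles before the next one starts.
    return n_insts * n_stages

def pipelined_cycles(n_insts, n_stages):
    # Pipelined design: n_stages cycles to fill the pipe, then one
    # instruction completes every cycle (steady state).
    return n_stages + (n_insts - 1)

# Four independent ADDs in a 4-stage (F, D, E, W) pipeline:
print(multicycle_cycles(4, 4))  # -> 16 cycles
print(pipelined_cycles(4, 4))   # -> 7 cycles (4 cycles per 4 insts in steady state)
```

In steady state the fill cost is amortized, which is why the slide quotes "4 cycles per 4 instructions".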
The Laundry Analogy
Steps:
- Place one dirty load of clothes in the washer
- When the washer is finished, place the wet load in the dryer
- When the dryer is finished, take out the dry load and fold
- When folding is finished, ask your roommate (??) to put the clothes away

Observations:
- Steps to do a load are sequentially dependent
- No dependence between different loads
- Different steps do not share resources

[Figure: four loads A-D processed sequentially on a 6 PM to 2 AM timeline. Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
Pipelining Multiple Loads of Laundry
[Figure: sequential vs. pipelined processing of four loads A-D on a 6 PM to 2 AM timeline. Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
- 4 loads of laundry in parallel; no additional resources
- Latency per load is the same; throughput increased by 4
Pipelining Multiple Loads of Laundry: In Practice
[Figure: pipelined laundry where one step (the dryer) is slower than the others. Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
The slowest step decides throughput.
Pipelining Multiple Loads of Laundry: In Practice
[Figure: pipelined laundry with two dryers, so loads A/B and C/D can dry concurrently. Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
Throughput restored (2 loads per hour) using 2 dryers.
Pipelining is all about overlapping latencies.
An Ideal Pipeline
Goal: Increase throughput with little increase in cost (hardware cost, in the case of instruction processing)
- Repetition of identical operations: the same operation is repeated on a large number of different inputs
- Repetition of independent operations: no dependencies between repeated operations
- Uniformly partitionable suboperations: processing can be evenly divided into uniform-latency suboperations (that do not share resources)
Fitting examples: automobile assembly line, doing laundry
What about the instruction processing "cycle"?
10
Ideal Pipelining
- Combinational logic (F,D,E,M,W) with delay T ps: BW ≈ 1/T
- Split into two T/2 ps stages (F,D,E | M,W): BW ≈ 2/T
- Split into three T/3 ps stages (F,D | E,M | M,W): BW ≈ 3/T
11
More Realistic Pipeline: Throughput
Nonpipelined version with combinational delay T ps:
  BW = 1/(T + S), where S = latch delay
k-stage pipelined version (each stage T/k ps):
  BW(k-stage) = 1/(T/k + S)
  BW(max) = 1/(1 gate delay + S)
12
More Realistic Pipeline: Cost
Nonpipelined version with combinational cost G gates:
  Cost = G + L, where L = latch cost
k-stage pipelined version (each stage G/k gates):
  Cost(k-stage) = G + L·k
13
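The throughput and cost formulas above can be tabulated to show the tradeoff: latch delay S caps throughput at 1/S while cost grows linearly in k. The numeric values below (T, S, G, L) are assumptions chosen only for illustration:

```python
def bw(k, T, S):
    # Throughput of a k-stage pipeline: the cycle time is the per-stage
    # combinational delay T/k plus the latch delay S.
    return 1.0 / (T / k + S)

def cost(k, G, L):
    # Hardware cost: combinational logic G plus one set of latches (cost L)
    # per pipeline stage.
    return G + L * k

T, S = 800.0, 50.0    # assumed combinational delay and latch delay, in ps
G, L = 1000.0, 100.0  # assumed gate and latch costs, in arbitrary units
for k in (1, 2, 4, 8, 16):
    print(k, round(bw(k, T, S), 5), cost(k, G, L))
```

Note how doubling k past a point buys little bandwidth (the T/k term shrinks below S) while the latch cost L·k keeps growing.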
Pipelining Instruction Processing
14
Remember: The Instruction Processing Cycle
Fetch, Decode, Evaluate Address, Fetch Operands, Execute, Store Result
1. Instruction fetch (IF)
2. Instruction decode and register operand fetch (ID/RF)
3. Execute / evaluate memory address (EX/AG)
4. Memory operand fetch (MEM)
5. Store / writeback result (WB)
15
Remember the Single-Cycle Uarch
[Figure: the single-cycle MIPS datapath: PC, instruction memory, register file, sign extend, ALU and ALU control, data memory, and the muxes driven by the control signals (RegDst, ALUSrc, ALUOp, MemRead, MemWrite, MemtoReg, RegWrite, Branch, Jump, bcond, PCSrc1=Jump, PCSrc2=Br Taken). Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
The whole datapath has delay T, so BW ≈ 1/T.
Dividing Into Stages
[Figure: the single-cycle datapath partitioned into five stages with example latencies:
- IF: instruction fetch (200 ps)
- ID: instruction decode / register file read (100 ps)
- EX: execute / address calculation (200 ps)
- MEM: memory access (200 ps)
- WB: write back (100 ps; ignore the RF write for now)
Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
Is this the correct partitioning?
- Not balanced (balancing is difficult)
- Why not 4 or 6 stages? Why not different boundaries?
Instruction Pipeline Throughput
[Figure: program execution order for three loads (lw $1, 100($0); lw $2, 200($0); lw $3, 300($0)).
Nonpipelined: each instruction goes through instruction fetch, register read, ALU, data access, and register write, and a new instruction starts only every 800 ps.
Pipelined: the same five steps overlap, and a new instruction starts every 200 ps.
Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
5-stage speedup is 4, not 5 as predicted by the ideal model. Why?
Raw latency has increased for every instruction: the downside of not balancing the stages.
18
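The "speedup is 4, not 5" observation follows directly from the stage latencies on the previous slide, because the clock period must equal the slowest stage (200 ps), not the balanced 800/5 = 160 ps. A small sketch, using the slide's latencies:

```python
def pipeline_speedup(stage_latencies_ps, n_insts):
    # Nonpipelined: each instruction takes the sum of all stage latencies.
    single_cycle = sum(stage_latencies_ps)
    # Pipelined: the clock period is set by the slowest stage.
    clock = max(stage_latencies_ps)
    fill = len(stage_latencies_ps) - 1  # cycles to fill the pipeline
    pipelined = (n_insts + fill) * clock
    return (n_insts * single_cycle) / pipelined

# IF=200, ID=100, EX=200, MEM=200, WB=100 ps, as on the slide
stages = [200, 100, 200, 200, 100]
print(round(pipeline_speedup(stages, 10**6), 2))  # -> 4.0, not 5.0
```

With perfectly balanced 160 ps stages the same computation would approach a speedup of 5.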
Enabling Pipelined Processing: Pipeline Registers
[Figure: the five-stage datapath with pipeline registers IF/ID, ID/EX, EX/MEM, and MEM/WB inserted between stages, each stage now taking T/k ps; the pipeline registers carry values such as IR, PC+4, the register operands A and B, the immediate, the ALU output, and the MDR down the pipeline. Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
No resource is used by more than one stage!
Pipelined Operation Example
[Figure sequence: a lw instruction flowing through the pipelined datapath one stage per cycle: instruction fetch, instruction decode, execution, memory, write back. Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
All instruction classes must follow the same path and timing through the pipeline stages. Any performance impact?
20
Pipelined Operation Example
[Figure sequence, clocks 1-6: lw $10, 20($1) followed by sub $11, $2, $3 flowing through the pipeline; in each cycle the lw advances one stage while the sub follows one stage behind it. Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
Is life always this beautiful?
21
Illustrating Pipeline Operation: Operation View
        t0   t1   t2   t3   t4   t5
Inst0   IF   ID   EX   MEM  WB
Inst1        IF   ID   EX   MEM  WB
Inst2             IF   ID   EX   MEM
Inst3                  IF   ID   EX
Inst4                       IF   ID
22
Illustrating Pipeline Operation: Resource View
      t0   t1   t2   t3   t4   t5   t6   t7   t8   t9   t10
IF    I0   I1   I2   I3   I4   I5   I6   I7   I8   I9   I10
ID         I0   I1   I2   I3   I4   I5   I6   I7   I8   I9
EX              I0   I1   I2   I3   I4   I5   I6   I7   I8
MEM                  I0   I1   I2   I3   I4   I5   I6   I7
WB                        I0   I1   I2   I3   I4   I5   I6
23
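The resource view above has a simple structure: instruction i occupies stage s at cycle i + s. A short sketch that generates the same table for any instruction count:

```python
def resource_view(n_insts, stages=("IF", "ID", "EX", "MEM", "WB")):
    # rows[stage][t] = index of the instruction occupying that stage at
    # cycle t, or None when the stage is idle (pipeline filling/draining).
    n_cycles = n_insts + len(stages) - 1
    rows = {}
    for s, name in enumerate(stages):
        rows[name] = [t - s if 0 <= t - s < n_insts else None
                      for t in range(n_cycles)]
    return rows

view = resource_view(7)
for name, occ in view.items():
    print(f"{name:4}", " ".join(" .  " if i is None else f"I{i:<3}" for i in occ))
```

Reading a column of the printed table gives the operation view (what each instruction is doing at cycle t); reading a row gives the resource view (which instruction holds each stage).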
Control Points in a Pipeline
[Figure: the pipelined datapath annotated with its control points: RegDst, ALUSrc, ALUOp, MemRead, MemWrite, MemtoReg, RegWrite, Branch, and PCSrc, with the control unit fed by the opcode field. Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
Identical set of control points as the single-cycle datapath!
24
Control Signals in a Pipeline
For a given instruction, the control signals are the same as in the single-cycle design, but they are required in different cycles, depending on the stage. Two options:
⇒ decode once using the same logic as single-cycle and buffer the control signals until consumed, or
⇒ carry the relevant "instruction word/field" down the pipeline and decode locally within each stage (or in a previous stage)
Which one is better?
[Figure: control signals produced in the decode stage are buffered in the ID/EX, EX/MEM, and MEM/WB pipeline registers; the EX, M, and WB signal groups are consumed stage by stage as the instruction advances.]
25
Pipelined Control Signals
[Figure: the pipelined datapath with control signals carried through the pipeline registers; the control unit decodes in ID and writes the EX, M, and WB signal groups into ID/EX, from which they propagate through EX/MEM and MEM/WB. Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
26
An Ideal Pipeline
Goal: Increase throughput with little increase in cost (hardware cost, in the case of instruction processing)
- Repetition of identical operations: the same operation is repeated on a large number of different inputs
- Repetition of independent operations: no dependencies between repeated operations
- Uniformly partitionable suboperations: processing can be evenly divided into uniform-latency suboperations (that do not share resources)
Fitting examples: automobile assembly line, doing laundry
What about the instruction processing "cycle"?
27
Instruction Pipeline: Not An Ideal Pipeline
Identical operations ... NOT!
⇒ different instructions do not need all stages
- Forcing different instructions to go through the same multi-function pipe → external fragmentation (some pipe stages idle for some instructions)
Uniform suboperations ... NOT!
⇒ difficult to balance the different pipeline stages
- Not all pipeline stages do the same amount of work → internal fragmentation (some pipe stages are too fast but all take the same clock cycle time)
Independent operations ... NOT!
⇒ instructions are not independent of each other
- Need to detect and resolve inter-instruction dependencies to ensure the pipeline operates correctly → the pipeline is not always moving (it stalls)
28
Issues in Pipeline Design
- Balancing work in pipeline stages: how many stages, and what is done in each stage
- Keeping the pipeline correct, moving, and full in the presence of events that disrupt pipeline flow:
  - Handling dependences: data, control
  - Handling resource contention
  - Handling long-latency (multi-cycle) operations
- Handling exceptions, interrupts
- Advanced: improving pipeline throughput, minimizing stalls
29
Causes of Pipeline Stalls
- Resource contention
- Dependences (between instructions): data, control
- Long-latency (multi-cycle) operations
30
Dependences and Their Types
Also called "dependency" or, less desirably, "hazard"
Dependences dictate ordering requirements between instructions
Two types:
- Data dependence
- Control dependence
Resource contention is sometimes called resource dependence. However, it is not fundamental to (dictated by) program semantics, so we will treat it separately.
Handling Resource Contention
Happens when instructions in two pipeline stages need the same resource
Solution 1: Eliminate the cause of contention
- Duplicate the resource or increase its throughput
- E.g., use separate instruction and data memories (caches)
- E.g., use multiple ports for memory structures
Solution 2: Detect the resource contention and stall one of the contending stages
- Which stage do you stall?
- Example: What if you had a single read and write port for the register file?
32
Data Dependences
Types of data dependences:
- Flow dependence (true data dependence: read after write)
- Output dependence (write after write)
- Anti dependence (write after read)
Which ones cause stalls in a pipelined machine?
- For all of them, we need to ensure the semantics of the program are correct
- Flow dependences always need to be obeyed because they constitute true dependence on a value
- Anti and output dependences exist due to the limited number of architectural registers; they are dependence on a name, not a value
- We will later see what we can do about them
33
Data Dependence Types
Flow dependence (Read-after-Write, RAW):
  r3 ← r1 op r2
  r5 ← r3 op r4

Anti dependence (Write-after-Read, WAR):
  r3 ← r1 op r2
  r1 ← r4 op r5

Output dependence (Write-after-Write, WAW):
  r3 ← r1 op r2
  r5 ← r3 op r4
  r3 ← r6 op r7
34
Pipelined Operation Example
[Figure sequence, clocks 1-6, repeated from earlier: lw $10, 20($1) followed by sub $11, $2, $3 flowing through the pipeline. Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
What if the SUB were dependent on LW?
35
Data Dependence Handling
36
Readings for Next Few Lectures
- P&H Chapter 4.9-4.11
- Smith and Sohi, "The Microarchitecture of Superscalar Processors," Proceedings of the IEEE, 1995
  - More advanced pipelining
  - Interrupt and exception handling
  - Out-of-order and superscalar execution concepts
37
How to Handle Data Dependences
Anti and output dependences are easier to handle: write to the destination in only one stage and in program order; no problem unless writes are reordered
Flow dependences are more interesting
Five fundamental ways of handling flow dependences:
1. Detect and wait until the value is available in the register file
2. Detect and forward/bypass data to the dependent instruction
   - The dependent instruction can progress until it needs the value
3. Detect and eliminate the dependence at the software level
   - No need for the hardware to detect the dependence
4. Predict the needed value(s), execute "speculatively", and verify
   - E.g., loading from an array initialized to 0 [hardware table]
5. Do something else (fine-grained multithreading)
   - Every cycle, fetch from a different thread [multiple PCs, register files, ...]
   - The fetch stage has multiple PCs and a mux; no two instructions of the same thread are in the pipeline concurrently
   - No need to detect the dependence
38
Interlocking
Detection of dependence between instructions in a pipelined processor to guarantee correct execution
- Software-based interlocking vs. hardware-based interlocking
- MIPS acronym?
39
Approaches to Dependence Detection (I)
Scoreboarding:
- Each register in the register file has a Valid bit associated with it
- An instruction that is writing to the register resets the Valid bit
- An instruction in the Decode stage checks whether all its source and destination registers are Valid
  - Yes: no need to stall; no dependence
  - No: stall the instruction
Advantage: simple; 1 bit per register
Disadvantage: needs to stall for all types of dependences, not only flow dependences
40
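The scoreboard described above can be sketched in a few lines. This is a behavioral model, not hardware; the 32-register file size and method names are assumptions for illustration:

```python
class Scoreboard:
    # One Valid bit per architectural register (assume a 32-register file).
    def __init__(self, n_regs=32):
        self.valid = [True] * n_regs

    def can_issue(self, srcs, dest):
        # Decode-stage check: proceed only if ALL source and destination
        # registers are Valid. Checking the destination too is why the
        # scoreboard also stalls on anti and output dependences.
        return all(self.valid[r] for r in list(srcs) + [dest])

    def issue(self, dest):
        # The writing instruction resets the Valid bit of its destination.
        if dest != 0:
            self.valid[dest] = False

    def writeback(self, dest):
        # Writeback makes the register's value available again.
        self.valid[dest] = True

sb = Scoreboard()
sb.issue(3)                               # 'add r3, r1, r2' in flight
print(sb.can_issue(srcs=[3, 4], dest=5))  # False: flow dependence on r3
print(sb.can_issue(srcs=[1, 2], dest=3))  # False: stalls on the output dep. too
sb.writeback(3)
print(sb.can_issue(srcs=[3, 4], dest=5))  # True
```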
Not Stalling on Anti and Output Dependences
What changes would you make to the scoreboard to enable this?
41
Approaches to Dependence Detection (II)
Combinational dependence check logic:
- Special logic that checks whether any instruction in a later stage is supposed to write to any source register of the instruction being decoded
  - Yes: stall the instruction/pipeline
  - No: no need to stall; no flow dependence
Advantage: no need to stall on anti and output dependences
Disadvantage:
- Logic is more complex than a scoreboard
- Logic becomes more complex as we make the pipeline deeper and wider (flash-forward: think superscalar execution)
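The combinational check compares the decoding instruction's sources only against the destinations of in-flight writers, so anti and output dependences never stall. A minimal sketch (the function name and argument shapes are assumptions):

```python
def must_stall(decode_srcs, later_stage_dests):
    # later_stage_dests: destination registers of the instructions currently
    # in EX, MEM, and WB (None for instructions that do not write the RF).
    # Stall only on a flow dependence: a later-stage instruction writes a
    # register that the decoding instruction reads. Writes to r0 are ignored.
    writes = {d for d in later_stage_dests if d is not None and d != 0}
    return any(s in writes for s in decode_srcs)

print(must_stall(decode_srcs=[1, 2], later_stage_dests=[3, None, 4]))  # False
print(must_stall(decode_srcs=[3, 2], later_stage_dests=[3, None, 4]))  # True
```

Contrast with the scoreboard sketch earlier: here the destination of the decoding instruction is never compared, so WAR/WAW cases proceed without stalling.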
Once You Detect the Dependence in Hardware
What do you do afterwards?
Observation: Dependence between two instructions is detected before the communicated data value becomes available
Option 1: Stall the dependent instruction right away
Option 2: Stall the dependent instruction only when necessary → data forwarding/bypassing
Option 3: …
43
Data Forwarding/Bypassing
Problem: a consumer (dependent) instruction has to wait in the decode stage until the producer instruction writes its value to the register file
Goal: we do not want to stall the pipeline unnecessarily
Observation: the data value needed by the consumer instruction can be supplied directly from a later stage in the pipeline (instead of only from the register file)
Idea: add additional dependence check logic and data forwarding paths (buses) to supply the producer's value to the consumer right after the value is available
Benefit: the consumer can move through the pipeline until the point the value can be supplied → less stalling
44
A Special Case of Data Dependence
Control dependence: data dependence on the Instruction Pointer / Program Counter
45
Control Dependence
Question: What should the fetch PC be in the next cycle?
Answer: The address of the next instruction
- All instructions are control dependent on previous ones. Why?
If the fetched instruction is a non-control-flow instruction:
- The next fetch PC is the address of the next-sequential instruction
- Easy to determine if we know the size of the fetched instruction
If the fetched instruction is a control-flow instruction:
- How do we determine the next fetch PC?
- In fact, how do we even know whether or not the fetched instruction is a control-flow instruction? [Pre-decoded I-cache]
46
Data and Control Dependence Handling
Readings for Next Few Lectures
- P&H Chapter 4.9-4.11
- Smith and Sohi, "The Microarchitecture of Superscalar Processors," Proceedings of the IEEE, 1995
  - More advanced pipelining
  - Interrupt and exception handling
  - Out-of-order and superscalar execution concepts
- McFarling, "Combining Branch Predictors," DEC WRL Technical Report, 1993
- Kessler, "The Alpha 21264 Microprocessor," IEEE Micro, 1999
48
Data Dependence Handling: More Depth & Implementation
49
Remember: Data Dependence Types
Flow dependence (Read-after-Write, RAW):
  r3 ← r1 op r2
  r5 ← r3 op r4

Anti dependence (Write-after-Read, WAR):
  r3 ← r1 op r2
  r1 ← r4 op r5

Output dependence (Write-after-Write, WAW):
  r3 ← r1 op r2
  r5 ← r3 op r4
  r3 ← r6 op r7
50
How to Handle Data Dependences
Anti and output dependences are easier to handle: write to the destination in only one stage and in program order
Flow dependences are more interesting
Five fundamental ways of handling flow dependences:
1. Detect and wait until the value is available in the register file
2. Detect and forward/bypass data to the dependent instruction
3. Detect and eliminate the dependence at the software level
   - No need for the hardware to detect the dependence
4. Predict the needed value(s), execute "speculatively", and verify
5. Do something else (fine-grained multithreading); no need to detect
RAW Dependence Handling
The following flow dependences lead to conflicts in the 5-stage pipeline:

addi ra, r-, -    IF ID EX MEM WB
addi r-, ra, -       IF ID EX MEM WB
addi r-, ra, -          IF ID EX MEM ...
addi r-, ra, -             IF ID EX ...
addi r-, ra, -                IF ID ...
addi r-, ra, -                   IF ?

The producer writes ra to the register file in WB; each consumer reads ra in ID. The readers at distance 1, 2, and 3 reach ID before the write has happened; only a reader at distance 4 or more is safe.
52
Register Data Dependence Analysis
For a given pipeline, when is there a potential conflict between two data-dependent instructions?
- dependence type: RAW, WAR, WAW?
- instruction types involved?
- distance between the two instructions?

      R/I-Type   LW         SW        Br        J   Jr
IF
ID    read RF    read RF    read RF   read RF       read RF
EX
MEM
WB    write RF   write RF
53
Safe and Unsafe Movement of Pipeline
RAW dependence:  i: rk ← _   (Reg Write in stage Y)
                 j: _ ← rk   (Reg Read in stage X)

WAR dependence:  i: _ ← rk   (Reg Read)
                 j: rk ← _   (Reg Write)

WAW dependence:  i: rk ← _   (Reg Write)
                 j: rk ← _   (Reg Write)

dist(i,j) ≤ dist(X,Y) ⇒ unsafe to keep j moving
dist(i,j) > dist(X,Y) ⇒ safe
RAW Dependence Analysis Example
Instructions IA and IB (where IA comes before IB) have a RAW dependence iff
- IB (R/I, LW, SW, Br or JR) reads a register written by IA (R/I or LW)
- dist(IA, IB) ≤ dist(ID, WB) = 3

What about WAW and WAR dependences? What about memory data dependence?

      R/I-Type   LW         SW        Br        J   Jr
IF
ID    read RF    read RF    read RF   read RF       read RF
EX
MEM
WB    write RF   write RF
55
Pipeline Stall: Resolving Data Dependence
Stall == make the dependent instruction wait until its source data value is available
1. stop all up-stream stages
2. drain all down-stream stages

[Diagrams: for dist(i,j) = 1, 2, 3, and 4, the dependent instruction j (i: rx ← _; j: _ ← rx) is held in ID with 3, 2, 1, and 0 bubbles respectively, so that j reads the register file only after i has written it in WB; the younger instructions k and l are held behind j.]
56
How to Implement Stalling
Stall:
- disable PC and IR latching; ensure the stalled instruction stays in its stage
- insert "invalid" instructions/nops into the stage following the stalled one
  - either a Valid bit in the pipeline register, gated with the subsequent stages (all logic that updates state), or the control logic issues a nop instruction

[Figure: the pipelined datapath with control, showing where the PC, IF/ID register, and control signals must be gated to implement stalling. Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
Stall Conditions
Instructions IA and IB (where IA comes before IB) have a RAW dependence iff
- IB (R/I, LW, SW, Br or JR) reads a register written by IA (R/I or LW)
- dist(IA, IB) ≤ dist(ID, WB) = 3
In other words, we must stall when IB in the ID stage wants to read a register to be written by IA in the EX, MEM or WB stage
58
Stall Conditions
Helper functions:
- rs(I) returns the rs field of I
- use_rs(I) returns true if I requires RF[rs] and rs != r0

Stall when:
   (rs(IR_ID) == dest_EX)  && use_rs(IR_ID) && RegWrite_EX
or (rs(IR_ID) == dest_MEM) && use_rs(IR_ID) && RegWrite_MEM
or (rs(IR_ID) == dest_WB)  && use_rs(IR_ID) && RegWrite_WB
or (rt(IR_ID) == dest_EX)  && use_rt(IR_ID) && RegWrite_EX
or (rt(IR_ID) == dest_MEM) && use_rt(IR_ID) && RegWrite_MEM
or (rt(IR_ID) == dest_WB)  && use_rt(IR_ID) && RegWrite_WB

It is crucial that the EX, MEM and WB stages continue to advance normally during stall cycles.
59
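The stall expression above translates almost line-for-line into code. A behavioral sketch (the dictionary shapes for the ID-stage instruction and the per-stage pipeline state are assumptions for illustration):

```python
def stall(ir_id, pipe):
    # ir_id: decoded fields of the instruction currently in ID:
    #   {'rs': int, 'rt': int, 'use_rs': bool, 'use_rt': bool}
    # pipe: per-stage {'dest': int, 'regwrite': bool} for 'EX', 'MEM', 'WB'.
    def match(src, use):
        # use_rs/use_rt already implies src != r0 on the slide; we also
        # check src != 0 here to be explicit.
        return use and src != 0 and any(
            pipe[st]['regwrite'] and pipe[st]['dest'] == src
            for st in ('EX', 'MEM', 'WB'))
    return match(ir_id['rs'], ir_id['use_rs']) or match(ir_id['rt'], ir_id['use_rt'])

pipe = {'EX':  {'dest': 8, 'regwrite': True},
        'MEM': {'dest': 0, 'regwrite': False},
        'WB':  {'dest': 9, 'regwrite': True}}
print(stall({'rs': 8, 'rt': 2, 'use_rs': True, 'use_rt': True}, pipe))  # True
print(stall({'rs': 3, 'rt': 4, 'use_rs': True, 'use_rt': True}, pipe))  # False
```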
Impact of Stall on Performance
Each stall cycle corresponds to one lost cycle in which no instruction can be completed
For a program with N instructions and S stall cycles:
  Average CPI = (N + S) / N
S depends on:
- frequency of RAW dependences
- exact distance between the dependent instructions
- distance between dependences
  - suppose i1, i2 and i3 all depend on i0; once i1's dependence is resolved, i2 and i3 must be okay too
60
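The CPI formula above can be checked with a quick computation. The 20% dependence frequency below is an assumed example workload, not data from the slides:

```python
def average_cpi(n_insts, stall_cycles):
    # Each stall cycle is a cycle in which no instruction completes, so an
    # otherwise 1-CPI pipeline takes N + S cycles for N instructions.
    return (n_insts + stall_cycles) / n_insts

# E.g., if 20% of instructions each incur a 3-cycle stall:
n = 1_000_000
s = int(0.2 * n) * 3
print(average_cpi(n, s))  # -> 1.6
```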
Sample Assembly (P&H)
for (j = i - 1; j >= 0 && v[j] > v[j+1]; j -= 1) { ... }

          addi $s1, $s0, -1
for2tst:  slti $t0, $s1, 0
          bne  $t0, $zero, exit2
          sll  $t1, $s1, 2
          add  $t2, $a0, $t1
          lw   $t3, 0($t2)
          lw   $t4, 4($t2)
          slt  $t0, $t4, $t3
          beq  $t0, $zero, exit2
          ...
          addi $s1, $s1, -1
          j    for2tst
exit2:

Without forwarding, six flow dependences in this loop each cost 3 stalls.
61
Data Forwarding (or Data Bypassing)
It is intuitive to think of the RF as state: "add rx ry rz" literally means get the values of RF[ry] and RF[rz], respectively, and put the result in RF[rx]
But the RF is just part of a communication abstraction: "add rx ry rz" means (1) get the results of the last instructions to define the values of RF[ry] and RF[rz], respectively, and (2) until another instruction redefines RF[rx], younger instructions that refer to RF[rx] should use this instruction's result
What matters is to maintain the correct "dataflow" between operations. Thus, instead of stalling in ID:

add  ra, r-, r-    IF ID EX MEM WB
addi r-, ra, r-       IF ID EX MEM WB   (ra forwarded to EX instead of waiting for WB)
Resolving RAW Dependence with Forwarding
Instructions IA and IB (where IA comes before IB) have a RAW dependence iff
- IB (R/I, LW, SW, Br or JR) reads a register written by IA (R/I or LW)
- dist(IA, IB) ≤ dist(ID, WB) = 3
In other words, if IB in the ID stage reads a register written by IA in the EX, MEM or WB stage, then the operand required by IB is not yet in the RF
⇒ retrieve the operand from the datapath instead of the RF
⇒ retrieve the operand from the youngest definition if multiple definitions are outstanding
63
Data Forwarding Paths (v1)
[Figure: forwarding paths into the ALU input muxes (ForwardA, ForwardB), controlled by a forwarding unit that compares the Rs/Rt of the instruction in EX against EX/MEM.RegisterRd and MEM/WB.RegisterRd; paths cover dist(i,j)=1 and dist(i,j)=2, while dist(i,j)=3 needs either an explicit path or an internal forward in the register file. Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
Data Forwarding Paths (v2)
[Figure: the same forwarding paths covering dist(i,j)=1 and dist(i,j)=2; this version assumes the RF forwards internally, which covers dist(i,j)=3. Based on original figure from P&H CO&D, COPYRIGHT 2004 Elsevier. ALL RIGHTS RESERVED.]
65
Data Forwarding Logic (for v2)

 if (rsEX != 0) && (rsEX == destMEM) && RegWriteMEM then
   forward operand from MEM stage        // dist = 1
 else if (rsEX != 0) && (rsEX == destWB) && RegWriteWB then
   forward operand from WB stage         // dist = 2
 else
   use AEX (operand from register file)  // dist >= 3
Ordering matters!! Must check youngest match first
Why doesn’t use_rs( ) appear in the forwarding logic?
What does the above not take into account?
66
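The forwarding logic above can be transcribed directly into executable form (Python; signal names are illustrative stand-ins for the pipeline-latch fields):

```python
# Direct transcription of the v2 forwarding logic. Checking the MEM stage
# before the WB stage implements "retrieve the operand from the youngest
# definition" -- ordering matters.
def forward_select(rs_ex, dest_mem, regwrite_mem, dest_wb, regwrite_wb):
    if rs_ex != 0 and rs_ex == dest_mem and regwrite_mem:
        return "EX/MEM"   # dist = 1
    if rs_ex != 0 and rs_ex == dest_wb and regwrite_wb:
        return "MEM/WB"   # dist = 2
    return "RF"           # dist >= 3 (the RF forwards internally in v2)

# Youngest match wins when both stages hold a definition of the same register:
assert forward_select(8, 8, True, 8, True) == "EX/MEM"
assert forward_select(8, 9, True, 8, True) == "MEM/WB"
# Register 0 is hardwired to zero and is never forwarded:
assert forward_select(0, 0, True, 0, True) == "RF"
```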
Data Forwarding (Dependence Analysis)
 Even with data forwarding, a RAW dependence on an immediately preceding LW instruction requires a stall

        R/I-Type      LW        SW      Br    J    Jr
 IF
 ID                                               use
 EX     use, produce  use       use     use
 MEM                  produce   (use)
 WB
67
Sample Assembly, No Forwarding (P&H)
 for (j = i - 1; j >= 0 && v[j] > v[j+1]; j -= 1) { ...... }

     addi $s1, $s0, -1
for2tst:
     slti $t0, $s1, 0
     bne  $t0, $zero, exit2
     sll  $t1, $s1, 2
     add  $t2, $a0, $t1
     lw   $t3, 0($t2)
     lw   $t4, 4($t2)
     slt  $t0, $t4, $t3
     beq  $t0, $zero, exit2
     ......
     addi $s1, $s1, -1
     j    for2tst
exit2:

 3 stalls (x3: each flow dependence costs 3 stalls without forwarding)
68
Sample Assembly, Revisited (P&H)
 for (j = i - 1; j >= 0 && v[j] > v[j+1]; j -= 1) { ...... }

     addi $s1, $s0, -1
for2tst:
     slti $t0, $s1, 0
     bne  $t0, $zero, exit2
     sll  $t1, $s1, 2
     add  $t2, $a0, $t1
     lw   $t3, 0($t2)
     lw   $t4, 4($t2)
     nop
     slt  $t0, $t4, $t3
     beq  $t0, $zero, exit2
     ......
     addi $s1, $s1, -1
     j    for2tst
exit2:
69
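The nop insertion in the revised listing can be sketched as a tiny compiler pass (Python; the parsing is deliberately crude and only handles the instruction forms shown on this slide):

```python
# Hedged sketch of load-use nop insertion: even with forwarding, a lw
# followed immediately by a consumer of its destination needs one bubble,
# so insert a nop between them (as the revised P&H listing does).
def insert_load_use_nops(insns):
    out = []
    for i, insn in enumerate(insns):
        out.append(insn)
        if insn.startswith("lw") and i + 1 < len(insns):
            dest = insn.split()[1].rstrip(",")         # e.g. "$t4"
            nxt = insns[i + 1]
            srcs = nxt.replace(",", " ").split()[2:]   # source operands (crude)
            if dest in srcs:
                out.append("nop")
    return out

code = ["lw $t4, 4($t2)", "slt $t0, $t4, $t3"]
assert insert_load_use_nops(code) == ["lw $t4, 4($t2)", "nop", "slt $t0, $t4, $t3"]
```

Note that the first lw in the listing needs no nop: its consumer (the slt) is two instructions away, so forwarding alone covers it.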
Pipelining the LC-3b
70
Pipelining the LC-3b Let’s remember the single-bus datapath
 We will divide it into 5 stages:
  Fetch
  Decode / RF Access
  Address Generation / Execute
  Memory
  Store Result
 Conservative handling of data and control dependences:
  Stall on branch
  Stall on flow dependence
71
An Example LC-3b Pipeline
 [Slides 73-78: figures of the example LC-3b pipeline datapath]
Control of the LC-3b Pipeline
 Three types of control signals:
 Datapath Control Signals
  Control signals that control the operation of the datapath
 Control Store Signals
  Control signals (microinstructions) stored in the control store to be used in the pipelined datapath (can be propagated to stages later than decode)
 Stall Signals
  Ensure the pipeline operates correctly in the presence of dependences
79
80
Control Store in a Pipelined Machine
81
Stall Signals
 Pipeline stall: the pipeline does not move because an operation in a stage cannot complete
 Stall signals: ensure the pipeline operates correctly in the presence of such an operation
 Why could an operation in a stage not complete?
82
Pipelined LC-3b
 http://www.ece.cmu.edu/~ece447/s14/lib/exe/fetch.php?media=18447-lc3b-pipelining.pdf
83
End of Pipelining the LC-3b
84
Questions to Ponder
 What is the role of the hardware vs. the software in data dependence handling?
  Software-based interlocking
  Hardware-based interlocking
 Who inserts/manages the pipeline bubbles?
 Who finds the independent instructions to fill "empty" pipeline slots?
 What are the advantages/disadvantages of each?
85
Questions to Ponder
 What is the role of the hardware vs. the software in the order in which instructions are executed in the pipeline?
  Software-based instruction scheduling → static scheduling
  Hardware-based instruction scheduling → dynamic scheduling
86
More on Software vs. Hardware
 Software-based scheduling of instructions → static scheduling
  Compiler orders the instructions, hardware executes them in that order
  Contrast this with dynamic scheduling (in which the hardware can execute instructions out of the compiler-specified order)
How does the compiler know the latency of each instruction?
 What information does the compiler not know that makes static scheduling difficult?
  Answer: anything that is determined at run time
  Variable-length operation latency, memory address, branch direction
 How can the compiler alleviate this (i.e., estimate the unknown)?
  Answer: profiling
87
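A minimal sketch of what profiling for branch direction might look like (Python; the trace format and function names are hypothetical, not from the slides):

```python
# Hedged sketch of profiling: run the program once on a training input,
# count taken/not-taken outcomes per branch, and let the compiler assume
# the majority direction when scheduling/laying out code.
from collections import Counter

def profile_bias(trace):
    """trace: iterable of (branch_pc, taken) pairs from a profiling run.
    Returns a dict mapping branch_pc -> predicted-taken (majority vote)."""
    taken = Counter()
    total = Counter()
    for pc, t in trace:
        total[pc] += 1
        taken[pc] += t            # bool counts as 0/1
    return {pc: taken[pc] / total[pc] > 0.5 for pc in total}

# A loop-back branch taken 9 of 10 times, and a rarely-taken error check:
trace = [(0x40, True)] * 9 + [(0x40, False)] + [(0x80, False)] * 3
assert profile_bias(trace) == {0x40: True, 0x80: False}
```

The weakness the slide points at remains: if the run-time input behaves differently from the training input, the profile-based estimate is wrong.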
Control Dependence Handling
88
Review: Control Dependence
 Question: What should the fetch PC be in the next cycle?
 Answer: The address of the next instruction
  All instructions are control dependent on previous ones. Why?
 If the fetched instruction is a non-control-flow instruction:
  Next fetch PC is the address of the next sequential instruction
  Easy to determine if we know the size of the fetched instruction
 If the fetched instruction is a control-flow instruction:
  How do we determine the next fetch PC?
 In fact, how do we even know whether or not the fetched instruction is a control-flow instruction?
89
Branch Types

 Type           Direction at   Number of possible      When is next fetch
                fetch time     next fetch addresses?   address resolved?
 Conditional    Unknown        2                       Execution (register dependent)
 Unconditional  Always taken   1                       Decode (PC + offset)
 Call           Always taken   1                       Decode (PC + offset)
 Return         Always taken   Many                    Execution (register dependent)
 Indirect       Always taken   Many                    Execution (register dependent)
Different branch types can be handled differently
90
How to Handle Control Dependences
 Critical to keep the pipeline full with a correct sequence of dynamic instructions

 Potential solutions if the instruction is a control-flow instruction:
  Stall the pipeline until we know the next fetch address
  Guess the next fetch address (branch prediction)
  Employ delayed branching (branch delay slot)
  Do something else (fine-grained multithreading)
  Eliminate control-flow instructions (predicated execution)
  Fetch from both possible paths (if you know the addresses of both possible paths) (multipath execution)
91
Stall Fetch Until Next PC is Available: Good Idea?

         t0   t1   t2   t3   t4   t5
 Insth   IF   ID   ALU  MEM  WB
 Insti        IF   IF   ID   ALU  MEM
 Instj             --   IF   IF   ID
 Instk                       --   IF
 Instl

 (Each fetch stalls one cycle until the previous instruction is decoded and the next PC is known: one bubble per instruction.)

 This is the case with non-control-flow and unconditional br instructions!
92
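A quick back-of-the-envelope cost for this approach (the per-instruction penalties here are illustrative assumptions about the 5-stage design, not numbers from the slides):

```python
# Hedged sketch: steady-state CPI when fetch stalls until the next PC is known.
# In the timing diagram above, every fetch waits one cycle for the previous
# instruction's decode, so every instruction pays 1 bubble. If instead only
# control-flow instructions stalled (which requires knowing the instruction
# type at fetch), the cost would scale with the branch fraction.
def cpi_stall_until_decode(bubbles_per_insn=1):
    return 1.0 + bubbles_per_insn

def cpi_stall_branches_only(branch_frac, bubbles_per_branch):
    return 1.0 + branch_frac * bubbles_per_branch

assert cpi_stall_until_decode() == 2.0                          # CPI doubles
assert abs(cpi_stall_branches_only(0.20, 2) - 1.4) < 1e-9       # still costly
```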
Doing Better than Stalling Fetch …
 Rather than waiting for the true dependence on PC to resolve, just guess nextPC = PC + 4 to keep fetching every cycle
  Is this a good guess?
  What do you lose if you guessed incorrectly?

 ~20% of the instruction mix is control flow
  ~50% of "forward" control flow (i.e., if-then-else) is taken
  ~90% of "backward" control flow (i.e., loop back) is taken
 Overall, typically ~70% taken and ~30% not taken [Lee and Smith, 1984]

 Expect "nextPC = PC + 4" to be correct ~86% of the time, but what about the remaining 14%?
93
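The ~86% figure follows directly from the mix above:

```python
# Derivation of the ~86% figure from the Lee & Smith numbers on this slide:
# "nextPC = PC + 4" is wrong exactly when a control-flow instruction is taken.
branch_frac = 0.20   # ~20% of the instruction mix is control flow
taken_frac = 0.70    # ~70% of those are taken
pc_plus_4_correct = 1.0 - branch_frac * taken_frac
assert abs(pc_plus_4_correct - 0.86) < 1e-9   # correct ~86% of the time
```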
Guessing NextPC = PC + 4
 Always predict that the next sequential instruction is the next instruction to be executed
 This is a form of next fetch address prediction and branch prediction

 How can you make this more effective?

 Idea: Maximize the chances that the next sequential instruction is the next instruction to be executed
  Software: Lay out the control flow graph such that the "likely next instruction" is on the not-taken path of a branch
  Hardware: ??? (how can you do this in hardware…)
94
Guessing NextPC = PC + 4
 How else can you make this more effective?

 Idea: Get rid of control flow instructions (or minimize their occurrence)
 How?
  1. Get rid of unnecessary control flow instructions → combine predicates (predicate combining)
  2. Convert control dependences into data dependences → predicated execution
95
Predicate Combining (not Predicated Execution)
 Complex predicates are converted into multiple branches
  if ((a == b) && (c < d) && (a > 5000)) { … }
  → 3 conditional branches
 Problem: This increases the number of control dependences
 Idea: Combine predicate operations to feed a single branch instruction instead of having one branch for each
  Predicates stored and operated on using condition registers
  A single branch checks the value of the combined predicate
 + Fewer branches in code → fewer mispredictions/stalls
 -- Possibly unnecessary work
  -- If the first predicate is false, no need to compute other predicates
 Condition registers exist in IBM RS6000 and the POWER architecture
96
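The transformation can be sketched in straight-line Python standing in for condition-register instructions (the function is hypothetical; each comment names the machine-level step it mimics):

```python
# Hedged sketch of predicate combining for the example above: compute each
# predicate into a (condition) register, combine them, and branch once.
def guarded_body(a, b, c, d, body):
    p1 = (a == b)        # compare -> condition register 1
    p2 = (c < d)         # compare -> condition register 2
    p3 = (a > 5000)      # compare -> condition register 3
    # Predicate-combining op: note all three predicates are computed even if
    # p1 is false -- the "possibly unnecessary work" downside on the slide.
    p = p1 & p2 & p3
    if p:                # the single remaining conditional branch
        return body()
    return None

assert guarded_body(6000, 6000, 1, 2, lambda: "ran") == "ran"
assert guarded_body(1, 2, 1, 2, lambda: "ran") is None
```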
Predicated Execution
 Idea: Convert control dependence to data dependence

 Suppose we had a conditional move instruction…
  CMOV condition, R1 ← R2
  R1 = (condition == true) ? R2 : R1
  Employed in most modern ISAs (x86, Alpha)

 Code example with branches vs. CMOVs
  if (a == 5) { b = 4; } else { b = 3; }

  CMPEQ condition, a, 5;
  CMOV  condition, b ← 4;
  CMOV !condition, b ← 3;
97
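The CMOV sequence above can be modeled as pure data flow (Python; the helper names are illustrative): both candidate values exist, and a select picks one, so b's value depends on the condition as data, not as control.

```python
# Model of the CMOV semantics on this slide: dst = src if cond else dst.
def cmov(cond, dst, src):
    return src if cond else dst

def branchless(a):
    cond = (a == 5)            # CMPEQ condition, a, 5
    b = 0
    b = cmov(cond, b, 4)       # CMOV  condition, b <- 4
    b = cmov(not cond, b, 3)   # CMOV !condition, b <- 3
    return b

# Same results as "if (a == 5) { b = 4; } else { b = 3; }", with no branch:
assert branchless(5) == 4
assert branchless(7) == 3
```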
Conditional Execution in ARM Same as predicated execution
Every instruction is conditionally executed
98
Predicated Execution
 Eliminates branches → enables straight-line code (i.e., larger basic blocks in code)

 Advantages
  Always-not-taken prediction works better (no branches)
  Compiler has more freedom to optimize code (no branches)
   Control flow does not hinder instruction reordering optimizations
   Code optimizations hindered only by data dependences

 Disadvantages
  Useless work: some instructions fetched/executed but discarded (especially bad for easy-to-predict branches)
  Requires additional ISA support

 Can we eliminate all branches this way?
99
Predicated Execution
 We will get back to this…

 Some readings (optional):
  Allen et al., "Conversion of control dependence to data dependence," POPL 1983.
  Kim et al., "Wish Branches: Combining Conditional Branching and Predication for Adaptive Predicated Execution," MICRO 2005.
100