EE472 – Fall 2007
P. Chiang, with slide help from C. Kozyrakis (Stanford)
Department of Electrical Engineering, Oregon State University
http://eecs.oregonstate.edu/~pchiang

ECE472 Computer Architecture
Lecture #2 – Sep. 26, 2007
Patrick Chiang
TA: Kang-Min Hu
Chapter 1
EE472 – Fall 2007 P. Chiang with slides from C. Kozyrakis (Stanford)
Lecture 1 - 3
Introduction
• This course is all about how computers work
• But what do we mean by a computer?
  – Different types: desktop, servers, embedded devices
– Different uses: automobiles, graphics, finance, genomics…
– Different manufacturers: Intel, Apple, IBM, Microsoft, Sun…
– Different underlying technologies and different costs!
• Analogy: Consider a course on “automotive vehicles”
  – Many similarities from vehicle to vehicle (e.g., wheels)
– Huge differences from vehicle to vehicle (e.g., gas vs. electric)
• Best way to learn:
  – Focus on a specific instance and learn how it works
– While learning general principles and historical perspectives
Why learn this stuff?
• You want to call yourself a “computer scientist”
• You want to build software people use (need performance)
• You need to make a purchasing decision or offer “expert” advice
• Both hardware and software affect performance:
  – Algorithm determines number of source-level statements
  – Language/compiler/architecture determine machine instructions (Chapters 2 and 3)
  – Processor/memory determine how fast instructions are executed (Chapters 5, 6, and 7)
• Assessing and understanding performance: Chapter 4
What is a computer?
• Components:
  – input (mouse, keyboard)
  – output (display, printer)
  – memory (disk drives, DRAM, SRAM, CD)
  – network
• Our primary focus: the processor (datapath and control)
  – implemented using millions of transistors
  – impossible to understand by looking at each transistor
  – we need...
Abstraction
• Delving into the depths reveals more information
• An abstraction omits unneeded detail, helps us cope with complexity
What are some of the details that appear in these familiar abstractions?
How do computers work?
• Need to understand abstractions such as:
  – Applications software
  – Systems software
  – Assembly language
  – Machine language
  – Architectural issues: i.e., caches, virtual memory, pipelining
  – Sequential logic, finite state machines
  – Combinational logic, arithmetic circuits
  – Boolean logic, 1s and 0s
  – Transistors used to build logic gates (CMOS)
  – Semiconductors/silicon used to build transistors
  – Properties of atoms, electrons, and quantum dynamics
• So much to learn!
Instruction Set Architecture
• A very important abstraction
  – interface between hardware and low-level software
  – standardizes instructions, machine language bit patterns, etc.
  – advantage: different implementations of the same architecture
  – disadvantage: sometimes prevents using new innovations

True or false: binary compatibility is extraordinarily important.

• Modern instruction set architectures:
  – IA-32, PowerPC, MIPS, SPARC, ARM, and others
Historical Perspective
• ENIAC, built during World War II, was the first general-purpose computer
  – Used for computing artillery firing tables
  – 80 feet long, 8.5 feet high, and several feet wide
  – Each of the twenty 10-digit registers was 2 feet long
  – Used 18,000 vacuum tubes
  – Performed 1900 additions per second
• Since then, Moore’s Law: transistor capacity doubles every 18-24 months
• Lecture #2 – Sep. 27, 2007
• Notes: course notes at eecs.oregonstate.edu/~pchiang/ under ECE472
Today’s Lecture
• Review of Tuesday’s class
  – Computer architecture is on the brink of major upheaval
    • Multi-core computing
  – Computer systems performance metrics
    • Execution time
    • Power
    • Cost
• Today’s lecture material
  – Benchmarks – how to evaluate computer performance
  – MIPS assembly language
Examples
• Latency metric: program execution time in seconds
– Your system architecture can affect all of them
  • CPI: memory latency, IO latency, …
  • CCT: cache organization, …
  • IC: OS overhead, …
CPUtime = Seconds/Program
        = (Cycles/Program) × (Seconds/Cycle)
        = (Instructions/Program) × (Cycles/Instruction) × (Seconds/Cycle)
        = IC × CPI × CCT
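The CPUtime decomposition can be sketched numerically. The values below are made up purely for illustration (a hypothetical 1 GHz machine running a million-instruction program at 1.5 cycles per instruction):

```python
ic = 1_000_000        # IC: instruction count (Instructions/Program)
cpi = 1.5             # CPI: average cycles per instruction
cct = 1e-9            # CCT: clock cycle time in seconds (1 GHz clock)

cpu_time = ic * cpi * cct
print(cpu_time)       # about 0.0015 seconds
```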
A is Faster than B?
• Given the CPUtime for machines A and B, “A is X times faster than B” means:
• Example: CPUtime_A = 3.4 sec and CPUtime_B = 5.3 sec, then
  – A is 5.3/3.4 ≈ 1.56 times faster than B, or 56% faster
• If you start with bandwidth metrics of performance, use inverse ratio
X = CPUtime_B / CPUtime_A

X = Bandwidth_A / Bandwidth_B
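A quick numeric check of the execution-time ratio, using the example figures from this slide:

```python
time_a, time_b = 3.4, 5.3    # CPU times in seconds for machines A and B
x = time_b / time_a          # speedup of A over B (execution-time metric)
print(round(x, 2))           # 1.56, i.e. A is about 56% faster than B
```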
Speedup and Amdahl’s Law
• Speedup = CPUtime_old / CPUtime_new
• Given an optimization x that accelerates fraction fx of program by a factor
of Sx, how much is the overall speedup?
• Lessons from Amdahl’s law
  – Make common cases fast: as fx→1, speedup→Sx
  – But don’t over-optimize the common case: as Sx→∞, speedup→ 1 / (1−fx)
    • Speedup is limited by the fraction of the code that can be accelerated
    • The uncommon case will eventually become the common one
Speedup = CPUtime_old / CPUtime_new
        = CPUtime_old / (CPUtime_old × [(1 − fx) + fx / Sx])
        = 1 / [(1 − fx) + fx / Sx]
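Amdahl's law is a one-liner in code. The example values here (fx = 0.5, Sx = 100) are chosen to show how little an aggressive optimization helps when only half the program benefits:

```python
def amdahl_speedup(fx, sx):
    """Overall speedup when fraction fx of execution is accelerated by factor sx."""
    return 1.0 / ((1.0 - fx) + fx / sx)

print(round(amdahl_speedup(0.5, 100), 2))   # 1.98: a 100x optimization of half
                                            # the program barely doubles performance
```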
Amdahl’s Law Example
• If Sx=100, what is the overall speedup as a function of fx?
[Plot: Speedup vs. Fraction of Code Optimized, for Sx = 100. Speedup stays below 10 until the optimized fraction exceeds 0.9, then climbs steeply toward 100 as the fraction approaches 1.]
Cost of Integrated Circuits
Die yield = Wafer_yield × 1 / (1 + Defect_Density × Die_area)

IC cost = (Die cost + Testing cost + Packaging cost) / Final test yield

Die cost = Wafer cost / (Dies per wafer × Die yield)
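The cost equations chain together naturally. The sketch below uses the simple yield form reconstructed above (real models often raise the denominator to a power), and every numeric input is an illustrative assumption, not data from the slide:

```python
def die_yield(wafer_yield, defect_density, die_area):
    # simple yield model: yield falls as defect density x die area grows
    return wafer_yield / (1.0 + defect_density * die_area)

def die_cost(wafer_cost, dies_per_wafer, d_yield):
    # cost of one good die: wafer cost spread over the good dies
    return wafer_cost / (dies_per_wafer * d_yield)

def ic_cost(die, testing, packaging, final_test_yield):
    return (die + testing + packaging) / final_test_yield

# illustrative numbers: $5000 wafer, 100 dies, 0.6 defects/cm^2, 1 cm^2 die
y = die_yield(0.95, 0.6, 1.0)
print(round(die_cost(5000.0, 100, y), 2))   # about $84 per good die
```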
Power
• Power = C (capacitance) × Vdd² × f (frequency)
• Execution time = (Instructions/Program) × (Cycles/Instruction) × (Seconds/Cycle)
• Conflicting goals:
  – Execution time goes down, but power goes up!
  – Really an exponential power increase
• Ways to solve this problem?
• Operate on N instructions in parallel
  – Clock frequency => f/N
  – Keep clock frequency the same or reduce it
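The payoff of parallelism comes through the Vdd² term: running N units at f/N keeps throughput constant, and the slower clock permits a lower supply voltage. The 1.2 V and 1.0 V figures below are hypothetical values for illustration only:

```python
def dynamic_power(c, vdd, f):
    # dynamic power model from the slide: P = C * Vdd^2 * f
    return c * vdd**2 * f

single = dynamic_power(1e-9, 1.2, 2e9)        # one unit at 2 GHz: ~2.88 W
# Two units at f/2 each do the same work per second; the lower clock
# allows dropping Vdd (assumed 1.0 V here), and power falls with Vdd^2.
parallel = 2 * dynamic_power(1e-9, 1.0, 1e9)  # ~2.0 W for the same throughput
print(single, parallel)
```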
Evaluating Performance
• What do we mean by “performance?”
• How do we select benchmark programs?
• How do we summarize performance across a suite of programs?
  – When to use the different types of means
  – Statistics for architects
What is Performance?
• Unlike cost, depends on the program you run. Can be stated in terms of execution time or bandwidth.
• Given execution time for machines A and B, “A is X times faster than B” means:
X is called the speedup of A over B.
• Example: time(A) = 3.4 sec & time(B) = 5.3 sec for some program
  – A is 5.3/3.4 ≈ 1.56 times faster than B, or 56% faster
• For bandwidth metrics of performance, use inverse ratio
X = CPUtime_B / CPUtime_A

X = Bandwidth_A / Bandwidth_B
Choosing Benchmark Programs
• Criteria
  – Representative of real workloads in some way
  – Hard to “cheat” (i.e., get deceptively good performance that will never be seen in real life)
• Best solution: run substantial, real-world programs
  – Representative because real
  – Improvements on these programs = improvements in the real world
  – …but require more effort than “toy benchmarks”
• Examples:
  – SPEC CPU integer/floating-point suites
  – TPC transaction processing benchmarks
Benchmarks
• Scientific computing: Linpack, SpecOMP, SpecHPC, …
• Embedded benchmarks: EEMBC, Dhrystone, …
• Enterprise computing
  – TPC-C, TPC-W, TPC-H
  – SpecJbb, SpecSFS, SpecMail, Streams, …
  – MinuteSort, PennySort, …
• Other
  – 3Dmark, ScienceMark, Winstone, iBench, AquaMark, …
• Caveats:
  – Your results will be as good as your benchmarks
  – Make sure you know what the benchmark is designed to measure
  – Performance is not the only metric for computing systems
    • Cost, power consumption, reliability, real-time performance, …
  – Predicting the real-world programs/datasets of 3 years from now is hard
How do you summarize performance?
• Combining different benchmark results into one number: sometimes misleading, always controversial… and inevitable
• 3 types of means
  – Arithmetic: for times
  – Harmonic: for rates
  – Geometric: for ratios
• Statistics for architects: benchmark suites as samples of a population
  – Distributions
  – Confidence intervals
(Weighted) Arithmetic Mean
WAM = Σ_{i=1}^{n} Weight_i × Time_i    (equal weights of 1/n give the ordinary arithmetic mean)
Machine A Machine B Speedup (B over A)
Prog. 1 (sec) 1 10 0.1
Prog. 2 (sec) 1000 100 10
Mean (50/50) 500.5 55 9.1
Mean (75/25) 250.75 32.5 7.7
• If you know your exact workload (benchmarks & relative frequencies), this is the right way to summarize performance.
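The table's rows can be reproduced directly from the weighted-mean formula, using the program times from the table (weights sum to 1):

```python
def wam(weights, times):
    # weighted arithmetic mean of execution times
    return sum(w * t for w, t in zip(weights, times))

times_a = [1, 1000]    # Machine A: Prog. 1, Prog. 2 (seconds)
times_b = [10, 100]    # Machine B

print(wam([0.5, 0.5], times_a), wam([0.5, 0.5], times_b))      # 500.5 55.0
print(wam([0.75, 0.25], times_a), wam([0.75, 0.25], times_b))  # 250.75 32.5
```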
(Weighted) Harmonic Mean
WHM = 1 / Σ_{i=1}^{n} (Weight_i / Rate_i)    (equal weights of 1/n give the ordinary harmonic mean)
• Exactly analogous, but for averaging rates (work / unit time).
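A small sketch of the weighted harmonic mean; the two rates (10 and 40 units of work per second) are hypothetical:

```python
def whm(weights, rates):
    # weighted harmonic mean, for averaging rates (work per unit time)
    return 1.0 / sum(w / r for w, r in zip(weights, rates))

# equal time on each phase: the right average is 16, not the arithmetic 25
print(whm([0.5, 0.5], [10.0, 40.0]))   # 16.0
```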
Geometric mean: used for ratios
GM = (Π_{i=1}^{n} Ratio_i)^{1/n}
• Used by SPEC CPU suite. To avoid questions of how to weight benchmarks, normalize Machine A’s performance on each benchmark i to the performance of some reference machine Ref:
SPECRatio_i = Time_{i,Ref} / Time_{i,A}
and report GM of ratios as final result.
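The SPEC-style procedure, normalize each benchmark's time to a reference machine and take the geometric mean of the ratios, is a few lines of code. The times below are hypothetical:

```python
import math

def spec_style_gm(machine_times, ref_times):
    # SPECRatio_i = Time_i,Ref / Time_i,A; report the geometric mean of the ratios
    ratios = [ref / t for ref, t in zip(ref_times, machine_times)]
    return math.prod(ratios) ** (1.0 / len(ratios))

# machine A vs. a reference machine on two benchmarks: ratios 2.0 and 8.0
print(spec_style_gm([1.0, 2.0], [2.0, 16.0]))   # 4.0
```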
Pros and Cons of Geometric Mean
• Pros: ratio of means = mean of ratios: GM(X) / GM(Y) = GM(X/Y)
• Cons:
  – No intuitive physical meaning
  – Can’t be related back to execution time
Chapter 2
Instructions:
• Language of the Machine
• We’ll be working with the MIPS instruction set architecture
  – similar to other architectures developed since the 1980's
  – almost 100 million MIPS processors manufactured in 2002
  – used by NEC, Nintendo, Cisco, Silicon Graphics, Sony, …

[Bar chart: processor shipments (millions), 1998-2002, by architecture: ARM, IA-32, MIPS, Motorola 68K, PowerPC, Hitachi SH, SPARC, Other]
MIPS arithmetic
• All instructions have 3 operands
• Operand order is fixed (destination first)
Example:
C code: a = b + c
MIPS ‘code’: add a, b, c
(we’ll talk about registers in a bit)
“The natural number of operands for an operation like addition is three…requiring every instruction to have exactly three operands, no more and no less, conforms to the philosophy of keeping the hardware simple”
MIPS arithmetic
• Design Principle: simplicity favors regularity.
• Of course this complicates some things...
C code:    a = b + c + d;
MIPS code: add a, b, c
           add a, a, d
• Operands must be registers, only 32 registers provided
• Each register contains 32 bits
• Design Principle: smaller is faster. Why?
Registers vs. Memory
[Diagram: processor (control + datapath) connected to memory and to input/output devices]
• Arithmetic instruction operands must be registers — only 32 registers provided
• Compiler associates variables with registers
• What about programs with lots of variables?
Memory Organization
• Viewed as a large, single-dimension array, with an address.
• A memory address is an index into the array
• "Byte addressing" means that the index points to a byte of memory.
[Diagram: memory as an array of bytes at addresses 0, 1, 2, 3, 4, 5, 6, …, each holding 8 bits of data]
Memory Organization
• Bytes are nice, but most data items use larger "words"
• For MIPS, a word is 32 bits or 4 bytes.
• 2^32 bytes with byte addresses from 0 to 2^32 − 1
• 2^30 words with byte addresses 0, 4, 8, ..., 2^32 − 4
• Words are aligned; i.e., what are the least 2 significant bits of a word address?
[Diagram: memory as an array of 32-bit words at byte addresses 0, 4, 8, 12, …; registers hold 32 bits of data]
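The alignment question above has a one-line answer: a word address always has its two least-significant bits equal to 0, because words are 4 bytes apart. A quick sketch:

```python
def is_word_aligned(addr):
    # word addresses are multiples of 4, so the low two bits must be 0
    return (addr & 0b11) == 0

print(is_word_aligned(8), is_word_aligned(6))   # True False
```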
Instructions
• Load and store instructions
• Example:
  C code:    A[12] = h + A[8];
  MIPS code: lw  $t0, 32($s3)
             add $t0, $s2, $t0
             sw  $t0, 48($s3)
• Can refer to registers by name (e.g., $s2, $t2) instead of number
• Store word has destination last
• Remember: arithmetic operands are registers, not memory!
  Can’t write: add 48($s3), $s2, 32($s3)
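The offsets 32 and 48 in the example follow from byte addressing: element i of a word array lives at byte offset 4·i from the base register:

```python
# A[8] -> 4*8 = 32($s3); A[12] -> 4*12 = 48($s3)
print(4 * 8, 4 * 12)   # 32 48
```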
So far we’ve learned:
• MIPS
  — loading words but addressing bytes
  — arithmetic on registers only
• Instruction           Meaning
  add $s1, $s2, $s3     $s1 = $s2 + $s3
  sub $s1, $s2, $s3     $s1 = $s2 – $s3
  lw  $s1, 100($s2)     $s1 = Memory[$s2+100]
  sw  $s1, 100($s2)     Memory[$s2+100] = $s1
• Instructions, like registers and words of data, are also 32 bits long
  – Example: add $t1, $s1, $s2
  – registers have numbers: $t1=9, $s1=17, $s2=18
• Instruction Format:

000000 10001 10010 01001 00000 100000
op rs rt rd shamt funct
• Can you guess what the field names stand for?
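Packing the six R-format fields into a 32-bit word is just shifts and ORs. This sketch uses the register numbers stated on the slide ($t1=9, $s1=17, $s2=18) with op=0, shamt=0, funct=32 for add:

```python
def encode_r(op, rs, rt, rd, shamt, funct):
    # pack the six R-format fields: 6 + 5 + 5 + 5 + 5 + 6 = 32 bits
    return (op << 26) | (rs << 21) | (rt << 16) | (rd << 11) | (shamt << 6) | funct

# add $t1, $s1, $s2  ->  rs=17 ($s1), rt=18 ($s2), rd=9 ($t1)
word = encode_r(0, 17, 18, 9, 0, 32)
print(f"{word:032b}")   # 00000010001100100100100000100000
```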
Machine Language
• Consider the load-word and store-word instructions
  – What would the regularity principle have us do?
  – New principle: good design demands a compromise
• Introduce a new type of instruction format
  – I-type for data transfer instructions
  – the other format was R-type for register
• Example: lw $t0, 32($s2)
35 18 9 32
op rs rt 16 bit number
• Where's the compromise?
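The I-format packs the same way with a 16-bit immediate replacing the last three fields. Encoding the slide's example (op=35, rs=18, rt=9, immediate=32):

```python
def encode_i(op, rs, rt, imm):
    # pack the I-format fields: 6-bit op, two 5-bit registers, 16-bit immediate
    return (op << 26) | (rs << 21) | (rt << 16) | (imm & 0xFFFF)

# lw $t0, 32($s2): op=35, rs=18 ($s2), rt=9 ($t0), immediate=32
word = encode_i(35, 18, 9, 32)
print(hex(word))   # 0x8e490020
```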
Machine Language
• Instructions are bits
• Programs are stored in memory — to be read or written just like data
• Fetch & Execute Cycle
  – Instructions are fetched and put into a special register
  – Bits in the register "control" the subsequent actions
  – Fetch the “next” instruction and continue

[Diagram: processor connected to memory, which holds data, programs, compilers, editors, etc.]
Stored Program Concept
• Decision making instructions
  – alter the control flow,
  – i.e., change the "next" instruction to be executed
• MIPS conditional branch instructions:
  bne $t0, $t1, Label
  beq $t0, $t1, Label
• Example: if (i==j) h = i + j;
       bne $s0, $s1, Label
       add $s3, $s0, $s1
Label: ....
Control
• MIPS unconditional branch instruction: j Label
• Example:
  C code:          MIPS code:
  if (i!=j)          beq $s4, $s5, Lab1
    h=i+j;           add $s3, $s4, $s5
  else               j   Lab2
    h=i-j;           Lab1: sub $s3, $s4, $s5
                     Lab2: ...
• Can you build a simple for loop?
Control
So far:
• Instruction Meaning
add $s1,$s2,$s3     $s1 = $s2 + $s3
sub $s1,$s2,$s3     $s1 = $s2 – $s3
lw  $s1,100($s2)    $s1 = Memory[$s2+100]
sw  $s1,100($s2)    Memory[$s2+100] = $s1
bne $s4,$s5,L       Next instr. is at L if $s4 ≠ $s5
beq $s4,$s5,L       Next instr. is at L if $s4 = $s5
j   Label           Next instr. is at Label

• Formats:
  R: op rs rt rd shamt funct
  I: op rs rt 16-bit address
  J: op 26-bit address
• We have: beq, bne, what about Branch-if-less-than?
• New instruction: slt $t0, $s1, $s2
  means: if $s1 < $s2 then $t0 = 1 else $t0 = 0
• Can use this instruction to build "blt $s1, $s2, Label" — can now build general control structures
• Note that the assembler needs a register to do this
  — there are policy-of-use conventions for registers
Control Flow
Policy of Use Conventions
Name       Register number   Usage
$zero      0                 the constant value 0
$v0-$v1    2-3               values for results and expression evaluation
$a0-$a3    4-7               arguments
$t0-$t7    8-15              temporaries
$s0-$s7    16-23             saved
$t8-$t9    24-25             more temporaries
$gp        28                global pointer
$sp        29                stack pointer
$fp        30                frame pointer
$ra        31                return address
Register 1 ($at) reserved for assembler, 26-27 for operating system
• Small constants are used quite frequently (50% of operands), e.g.,
  A = A + 5;
  B = B + 1;
  C = C - 18;
• Solutions? Why not?
  – put 'typical constants' in memory and load them
  – create hard-wired registers (like $zero) for constants like one
• MIPS Instructions:
  addi $29, $29, 4
  slti $8, $18, 10
  andi $29, $29, 6
  ori  $29, $29, 4
• Design Principle: Make the common case fast. Which format?
Constants
• We'd like to be able to load a 32 bit constant into a register
• Must use two instructions, new "load upper immediate" instruction
  lui $t0, 1010101010101010

  $t0: 1010101010101010 0000000000000000    (lower 16 bits filled with zeros)

• Then must get the lower order bits right, i.e.,

  ori $t0, $t0, 1010101010101010

       1010101010101010 0000000000000000
  ori  0000000000000000 1010101010101010
  --------------------------------------
       1010101010101010 1010101010101010
How about larger constants?
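The lui/ori pair amounts to a shift and an OR, which can be checked directly on the slide's bit pattern:

```python
upper = 0b1010101010101010
lower = 0b1010101010101010

# lui places the 16-bit immediate in the upper half, zero-filling the lower
# half; ori then ORs the lower 16 bits into place
value = (upper << 16) | lower
print(f"{value:032b}")   # 10101010101010101010101010101010
```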
• Assembly provides convenient symbolic representation
  – much easier than writing down numbers
  – e.g., destination first
• Machine language is the underlying reality
  – e.g., destination is no longer first
• Assembly can provide 'pseudoinstructions'
  – e.g., “move $t0, $t1” exists only in Assembly
  – would be implemented using “add $t0, $t1, $zero”
• When considering performance you should count real instructions
Assembly Language vs. Machine Language
• Discussed in your assembly language programming lab: support for procedures,
  linkers, loaders, memory layout,
  stacks, frames, recursion,
  manipulating strings and pointers,
  interrupts and exceptions,
  system calls and conventions
• Some of these we'll talk more about later
• We’ll talk about compiler optimizations when we hit chapter 4.
Other Issues
• simple instructions all 32 bits wide
• very structured, no unnecessary baggage
• only three instruction formats
• rely on compiler to achieve performance— what are the compiler's goals?
• help compiler where we can
• Formats:
  R: op rs rt rd shamt funct
  I: op rs rt 16-bit address
  J: op 26-bit address
Overview of MIPS
• Instructions:
  bne $t4,$t5,Label   Next instruction is at Label if $t4 ≠ $t5
  beq $t4,$t5,Label   Next instruction is at Label if $t4 = $t5
  j   Label           Next instruction is at Label
• Formats:
  I: op rs rt 16-bit address
  J: op 26-bit address
• Addresses are not 32 bits — how do we handle this with load and store instructions?
Addresses in Branches and Jumps
• Instructions:
  bne $t4,$t5,Label   Next instruction is at Label if $t4 ≠ $t5
  beq $t4,$t5,Label   Next instruction is at Label if $t4 = $t5
• Format:
  I: op rs rt 16-bit address
• Could specify a register (like lw and sw) and add it to the address
  – use Instruction Address Register (PC = program counter)
  – most branches are local (principle of locality)
• Jump instructions just use high order bits of PC
  – address boundaries of 256 MB
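Pseudodirect jump addressing can be sketched as bit manipulation: keep the high 4 bits of PC+4, shift the 26-bit field left by 2 (targets are word-aligned). The PC value below is an assumed example; the target 2500 reproduces the "j 2500 goes to 10000" behavior listed in the instruction summary:

```python
def jump_target(pc, target26):
    # pseudodirect addressing: high 4 bits of PC+4 | 26-bit field | two zero bits
    return ((pc + 4) & 0xF0000000) | (target26 << 2)

print(jump_target(0x00400000, 2500))   # 10000
```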
Addresses in Branches
To summarize: MIPS operands

Name          Example                            Comments
32 registers  $s0-$s7, $t0-$t9, $zero,           Fast locations for data. In MIPS, data must be in registers to
              $a0-$a3, $v0-$v1, $gp,             perform arithmetic. MIPS register $zero always equals 0. Register
              $fp, $sp, $ra, $at                 $at is reserved for the assembler to handle large constants.
2^30 memory   Memory[0], Memory[4], ...,         Accessed only by data transfer instructions. MIPS uses byte
words         Memory[4294967292]                 addresses, so sequential words differ by 4. Memory holds data
                                                 structures, such as arrays, and spilled registers, such as those
                                                 saved on procedure calls.

MIPS assembly language

Category       Instruction              Example              Meaning                                  Comments
Arithmetic     add                      add $s1, $s2, $s3    $s1 = $s2 + $s3                          Three operands; data in registers
               subtract                 sub $s1, $s2, $s3    $s1 = $s2 - $s3                          Three operands; data in registers
               add immediate            addi $s1, $s2, 100   $s1 = $s2 + 100                          Used to add constants
Data transfer  load word                lw $s1, 100($s2)     $s1 = Memory[$s2 + 100]                  Word from memory to register
               store word               sw $s1, 100($s2)     Memory[$s2 + 100] = $s1                  Word from register to memory
               load byte                lb $s1, 100($s2)     $s1 = Memory[$s2 + 100]                  Byte from memory to register
               store byte               sb $s1, 100($s2)     Memory[$s2 + 100] = $s1                  Byte from register to memory
               load upper immediate     lui $s1, 100         $s1 = 100 * 2^16                         Loads constant in upper 16 bits
Conditional    branch on equal          beq $s1, $s2, 25     if ($s1 == $s2) go to PC + 4 + 100       Equal test; PC-relative branch
branch         branch on not equal      bne $s1, $s2, 25     if ($s1 != $s2) go to PC + 4 + 100       Not equal test; PC-relative
               set on less than         slt $s1, $s2, $s3    if ($s2 < $s3) $s1 = 1; else $s1 = 0     Compare less than; for beq, bne
               set less than immediate  slti $s1, $s2, 100   if ($s2 < 100) $s1 = 1; else $s1 = 0     Compare less than constant
Unconditional  jump                     j 2500               go to 10000                              Jump to target address
jump           jump register            jr $ra               go to $ra                                For switch, procedure return
               jump and link            jal 2500             $ra = PC + 4; go to 10000                For procedure call
[Diagram: the five MIPS addressing modes, showing instruction fields and operand locations]

1. Immediate addressing: the operand is a constant within the instruction itself (op | rs | rt | Immediate)
2. Register addressing: the operand is a register (op | rs | rt | rd | ... | funct)
3. Base addressing: the operand is in memory at the address given by a register plus the instruction's address field (byte, halfword, or word)
4. PC-relative addressing: the branch address is the sum of the PC and the instruction's address field
5. Pseudodirect addressing: the jump address is the instruction's 26-bit address field concatenated with the upper bits of the PC