
MAHARANA PRATAP COLLEGE OF TECHNOLOGY

Hyper-Threading

B.E. (508)

Submitted To: Sarika Tyagi
Submitted By: Anmol Purohit
Department: C.S.
Roll No.: 0903CS121019

TO BE TACKLED

• Introduction

• Hyper-Threading Concepts

• How Hyper-Threading Works

• Implementing Hyper-threading

• Hyper-Threading Architecture

• Applications

• Advantages/Disadvantages

• Conclusion

• References

INTRODUCTION

• Hyper-Threading Technology brings the simultaneous multi-threading (SMT) approach to the Intel architecture, allowing processors to work more efficiently.

• This technology enables the processor to execute two series, or threads, of instructions at the same time, improving performance and system responsiveness while delivering performance headroom for the future.

• Hyper-Threading Technology provides thread-level parallelism (TLP) on each processor, resulting in increased utilization of processor execution resources.

• Hyper-Threading Technology provides two logical processors in a single physical processor package.
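Because each package exposes two logical processors, the operating system simply sees more CPUs. A minimal Python sketch (not part of the original slides) showing how the logical processor count can be read; on a hyper-threaded machine this figure is typically twice the physical core count:

```python
import os

# os.cpu_count() reports LOGICAL processors: with Hyper-Threading
# enabled this is typically twice the number of physical cores.
logical = os.cpu_count()
print(f"Logical processors visible to the OS: {logical}")
```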

HYPER-THREADING CONCEPT

• At any point in time, only part of the processor's resources is in use executing program code.

• Unused resources can also be loaded, for example, with the parallel execution of another thread or application.

• This is extremely useful in desktop and server applications where many threads are used.


HOW HYPER THREADING WORKS

• A single processor supporting Hyper-Threading Technology presents itself to modern operating systems and applications as two virtual processors. The processor can work on two sets of tasks simultaneously, use resources that would otherwise sit idle, and get more work done in the same amount of time.

• HT Technology takes advantage of the multithreading capability that is built into Windows XP and many advanced applications. Multithreaded software divides its workload into processes and threads that can be independently scheduled and dispatched. In a multiprocessor system, those threads execute on different processors.
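As an illustration of the multithreading model described above, here is a Python sketch (the chunking scheme and function name are invented for the example) in which a workload is divided into threads that the OS may dispatch onto different logical processors. Note that CPython's global interpreter lock limits CPU-bound parallelism, so hyper-threading helps most with I/O-bound threads or native code:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each thread works on one independent slice of the data; the
    # OS scheduler is free to dispatch the threads onto different
    # logical processors.
    return sum(chunk)

data = list(range(1, 101))
chunks = [data[i:i + 25] for i in range(0, len(data), 25)]

with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))

print(total)  # 5050
```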

IMPLEMENTING HYPER-THREADING

• Replicated: register renaming logic, instruction pointer, ITLB, return stack predictor, various other architectural registers

• Partitioned: re-order buffers, load/store buffers, various queues (scheduling queue, uop queue)

• Shared: caches (trace cache, L1, L2, L3), micro-architectural registers, execution units

REPLICATED RESOURCES

Replicated resources are necessary in order to maintain two fully independent contexts on each logical processor.

The most obvious of these is the instruction pointer (IP), the pointer that helps the processor keep track of its place in the instruction stream by pointing to the next instruction to be fetched. In order to run more than one process on the CPU, you need as many IPs as there are instruction streams to keep track of; equivalently, you need one IP for each logical processor.

Similarly, the Xeon has two register allocation tables (RATs), each of which handles the mapping of one logical processor's eight architectural integer registers and eight architectural floating-point registers onto a shared pool of 128 GPRs (general-purpose registers) and 128 FPRs (floating-point registers). So the RAT is a replicated resource that manages a shared resource (the micro-architectural register file).
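The replicated-RAT-over-shared-pool arrangement can be sketched as follows. This is a toy Python model, not actual hardware logic; the register name and allocation policy are invented for illustration, with the pool size taken from the slide's 128-register figure:

```python
# Toy model: two replicated RATs (one per logical processor) map
# architectural register names onto one shared physical register pool.
PHYS_REGS = 128
free_pool = list(range(PHYS_REGS))   # shared micro-architectural pool
rats = {0: {}, 1: {}}                # replicated: one RAT per logical CPU

def rename(logical_cpu, arch_reg):
    """Allocate a fresh physical register for a write to arch_reg."""
    phys = free_pool.pop(0)
    rats[logical_cpu][arch_reg] = phys
    return phys

# Both logical processors write "eax", yet each gets its own physical
# register, so the two contexts remain fully independent.
p0 = rename(0, "eax")
p1 = rename(1, "eax")
print(p0, p1)  # 0 1
```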

PARTITIONED RESOURCES

Statically partitioned queue: each queue is split in half, with each half's resources solely dedicated to the use of one logical processor.

Dynamically partitioned queue: in a scheduling queue with 12 entries, instead of assigning entries 0 through 5 to logical processor 0 and entries 6 through 11 to logical processor 1, the queue allows any logical processor to use any entry but places a limit on the number of entries that any one logical processor can use. So in the case of a 12-entry scheduling queue, each logical processor can use no more than six of the entries.
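The dynamic partitioning policy just described can be sketched as a toy Python class (an illustrative model only, not the actual queue hardware): entries are shared, but each logical processor is capped at half the capacity:

```python
class DynamicallyPartitionedQueue:
    """Either logical processor may use any entry, but each is
    capped at half of the total capacity."""

    def __init__(self, capacity=12):
        self.capacity = capacity
        self.limit = capacity // 2        # per-logical-processor cap
        self.entries = []                 # (logical_cpu, uop) pairs

    def try_enqueue(self, logical_cpu, uop):
        used = sum(1 for cpu, _ in self.entries if cpu == logical_cpu)
        if len(self.entries) >= self.capacity or used >= self.limit:
            return False                  # that processor must stall
        self.entries.append((logical_cpu, uop))
        return True

q = DynamicallyPartitionedQueue(capacity=12)
accepted = sum(q.try_enqueue(0, f"uop{i}") for i in range(10))
print(accepted)  # 6: logical processor 0 is capped at six entries
```

Because the cap, not a fixed split, bounds each logical processor, the other processor can still claim its six entries even after the first one fills its share.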


SHARED RESOURCES

• Shared resources are at the heart of hyper-threading; they're what makes the technique worthwhile.

• The more resources that can be shared between logical processors, the more efficient hyper-threading can be at squeezing the maximum amount of computing power out of the minimum amount of die space.

• A class of shared resources consists of the execution units: the integer units, floating-point units, and load-store unit.

• Hyper-threading's greatest strength, shared resources, also turns out to be its greatest weakness.

• Problems arise when one thread monopolizes a crucial resource. This is the same problem seen in cooperative multitasking: one resource hog can ruin things for everyone else. Like a cooperative multitasking OS, the Xeon for the most part depends on each thread to play nicely and refrain from monopolizing any of its shared resources.

HYPER-THREADING ARCHITECTURE

• First used in Intel Xeon MP processor

• Makes a single physical processor appear as multiple logical processors.

• Each logical processor has its own copy of the architectural state.

• Logical processors share a single set of physical execution resources.


APPLICATIONS

• The Intel Xeon processor with Hyper-Threading Technology is well-suited for servers and high-end scientific computing workstations, as well as demanding applications such as graphics, multimedia, and gaming.

• Business Benefits

BUSINESS BENEFITS OF HYPER-THREADING

TECHNOLOGY

• Higher transaction rates for e-businesses

• Improved reaction and response times for end users and customers

• Increased number of users that a server system can support

• Ability to handle increased server workloads

• Compatibility with existing server applications and operating systems

ADVANTAGES

• No performance loss if only one thread is active; increased performance with multiple threads

• Improved overall system performance

• Increased number of users a platform can support

• Improved throughput, because tasks run on separate threads

• Improved reaction and response time

• Increased number of transactions that can be executed


DISADVANTAGES

• To take advantage of hyper-threading performance, serial execution cannot be used; the workload must be multi-threaded.

• Threads are non-deterministic and involve extra design effort.

• Threads have increased overhead.

• Shared-resource conflicts can degrade performance.


CONCLUSION

• Intel’s Hyper-Threading Technology brings the concept of simultaneous multi-threading to the Intel Architecture.

• It will become increasingly important going forward, as it adds a new technique for obtaining additional performance at lower transistor and power cost.

• The goal was to implement the technology at minimum cost while ensuring forward progress on each logical processor, even if the other is stalled, and to deliver full performance when there is only one active logical processor.

REFERENCES

• Intel Technology Journal, Volume 6, Issue 1, February 14, 2002

• "Intel Hyper-Threading Technology Review", www.digit-life.com/articles/pentium4xeonhyperthreading/

• "HyperThreading Threads Its Way into Application", http://www.tomshardware.com/cpu/20021227/

• "Introduction to Multithreading, Superthreading and Hyperthreading", http://www.arstechnica.com/paedia/h/hyperthreading/hyperthreading-1.html


Thank you