DESCRIPTION
Briefly introduces the features of Intel's NetBurst, Core, and Nehalem architectures, along with the heterogeneous NVIDIA Tegra GPU.
An Architecture Perspective On Modern Microprocessors And GPU
- Abhijeet Nawal
04/07/2023 AN ARCHITECTURE PERSPECTIVE
Agenda
• INTRODUCTION
• INTEL’S NETBURST ARCHITECTURE
• INTEL’S CORE ARCHITECTURE
• INTEL’S NEHALEM ARCHITECTURE
• SNEAK PEEK AT NVIDIA TEGRA GPU
• REFERENCES
Introduction
• Superscalar homogeneous processors from Intel.
• Performance = Frequency × IPC
• Power = Dynamic Capacitance × Volts × Volts × Frequency
• Dynamic capacitance is the ratio of the electrostatic charge on a conductor to the potential difference between the conductors required to maintain that charge.
• The more pipeline stages, the more instructions are in the pipeline at once.
• A higher number of pipeline stages k reduces IPC, since n instructions take k+(n-1) cycles to complete: IPC = n/(k+(n-1)).
• The lower IPC is offset by increasing the clock rate, since each stage time is shorter.
• Each instruction is CISC-based, so it decodes into micro-operations (µops).
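The performance, power, and pipeline relations above can be sketched numerically. All concrete numbers below (stage counts, clock rates, capacitance) are illustrative assumptions, not real processor figures:

```python
# Sketch of the slide's formulas. All numeric values are invented for illustration.

def pipeline_ipc(n_instructions, k_stages):
    """IPC of an ideal k-stage pipeline running n instructions: n / (k + n - 1)."""
    return n_instructions / (k_stages + n_instructions - 1)

def dynamic_power(capacitance, volts, freq_hz):
    """Power = Dynamic Capacitance x V^2 x Frequency."""
    return capacitance * volts * volts * freq_hz

# A deeper pipeline lowers ideal IPC for a fixed instruction stream...
ipc_shallow = pipeline_ipc(1000, 5)    # shallow 5-stage pipeline
ipc_deep = pipeline_ipc(1000, 20)      # NetBurst-like 20-stage pipeline
assert ipc_deep < ipc_shallow

# ...but shorter stages allow a higher clock, which can offset the loss,
# since Performance = Frequency x IPC:
perf_shallow = 1.0e9 * ipc_shallow
perf_deep = 3.0e9 * ipc_deep           # assumed 3x clock from shorter stages
assert perf_deep > perf_shallow
```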
Introduction… Streaming SIMD Extensions (SSE):
• SSE instructions provide 128-bit SIMD integer arithmetic and 128-bit SIMD double-precision floating-point operations.
• They reduce the overall number of instructions required to execute a particular program task.
• They accelerate a broad range of applications, including video, speech and image processing, photo processing, encryption, and financial, engineering, and scientific applications.
Predecode phase:
• Runs before the pipeline's fetch and decode stages and bundles instructions that can be executed in parallel.
• Instructions fetched from memory are tagged with extra bits as they enter the instruction cache.
• This unit therefore also has to analyze structural, control, and data hazards.
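A pure-Python sketch of why packed SIMD operations reduce instruction count: one 128-bit operation covers four 32-bit lanes, so a loop over N elements issues roughly N/4 vector operations instead of N scalar ones. The lane arithmetic here only stands in for real SSE instructions:

```python
# Illustrative model: count "instructions" issued by scalar vs 4-wide SIMD loops.

def scalar_add(a, b):
    out, ops = [], 0
    for x, y in zip(a, b):
        out.append(x + y)
        ops += 1                      # one scalar add per element
    return out, ops

def simd_add(a, b, lanes=4):          # 4 x 32-bit lanes in a 128-bit register
    out, ops = [], 0
    for i in range(0, len(a), lanes):
        out.extend(x + y for x, y in zip(a[i:i+lanes], b[i:i+lanes]))
        ops += 1                      # one packed add covers 4 elements
    return out, ops

a, b = list(range(16)), list(range(16))
r1, n1 = scalar_add(a, b)
r2, n2 = simd_add(a, b)
assert r1 == r2 and n2 == n1 // 4    # same result, a quarter of the operations
```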
Intel Architectures: Netburst
NetBurst → Core → Nehalem
NetBurst Architecture
NetBurst Microarchitecture
Features of NetBurst Architecture
Hyper-Threading:
• One physical processor appears as two logical processors.
• Each logical processor has its own set of registers and its own APIC (Advanced Programmable Interrupt Controller).
• Increases resource utilization and improves performance.
Introduced SSE3 (Streaming SIMD Extensions 3):
• Added some DSP-oriented instructions and some process (thread) management instructions.
Features of NetBurst… Hyper-Pipelined Technology:
• 20-stage pipeline.
• Branch mispredictions can lead to very costly pipeline flushes.
• Techniques to hide stall penalties: parallel execution, buffering, and speculation.
Three major components:
• In-order issue front end
• Out-of-order superscalar execution core
• In-order retirement unit
Features of NetBurst… In-Order Issue Front End:
Two major parts:
• Fetch/Decode Unit
• Execution Trace Cache
Fetch/Decode Unit:
• Prefetches IA-32 instructions that are likely to be executed (details under Prefetching).
• Fetches instructions that have not already been prefetched.
• Decodes instructions into µops and builds traces.
Features of NetBurst… Execution Trace Cache:
• Acts as the middleman between the first decode stage and the execution stage.
• Caches the decoded µops of repeating instruction sequences, avoiding re-decoding.
• Caches branch targets and delivers µops to the execution core.
Rapid Execution Engine:
• The Arithmetic Logic Units (ALUs) run at twice the processor frequency, which offsets the low-IPC factor.
• Basic integer operations execute in half a processor clock tick.
• Provides higher throughput and reduced execution latency.
Features of NetBurst… Out-of-Order Core:
• Contains multiple execution resources so that several µops can execute in parallel.
• µops contending for a resource are buffered; meanwhile, other µops are executed.
• Dependencies among µops are handled by appropriate buffering and by the in-order retirement logic of the retirement unit.
• Register-renaming logic helps resolve register conflicts.
• Up to three µops may be retired per cycle.
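Register renaming, mentioned above, removes false (write-after-write and write-after-read) conflicts by mapping each architectural destination register to a fresh physical register. A minimal sketch, with invented register names and instruction format:

```python
# Sketch of register renaming. Instructions are (dest, src1, src2) tuples;
# "rN" are architectural names, "pN" the physical registers we allocate.

def rename(instrs):
    mapping = {}                                   # architectural -> physical
    next_phys = 0
    renamed = []
    for dest, *srcs in instrs:
        srcs = [mapping.get(s, s) for s in srcs]   # read the current mappings
        mapping[dest] = f"p{next_phys}"            # fresh physical reg for dest
        next_phys += 1
        renamed.append((mapping[dest], *srcs))
    return renamed

# Two writes to r1 no longer collide after renaming:
out = rename([("r1", "r2", "r3"), ("r4", "r1", "r5"), ("r1", "r6", "r7")])
dests = [i[0] for i in out]
assert len(set(dests)) == 3          # every write got its own physical register
assert out[1][1] == dests[0]         # the read of r1 sees the first write
```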
Features of NetBurst… The Branch Predictor:
• Dynamically predicts the target of a branch instruction based on its linear address, using the branch target buffer.
• If no valid dynamic prediction is available, it predicts statically based on the offset of the target:
• a backward branch is predicted taken; a forward branch is predicted not taken.
• Return addresses are predicted using the 16-entry return address stack.
• It does not predict far transfers, for example far calls, interrupt returns, and software interrupts.
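The static fallback rule can be sketched directly: with no dynamic prediction, a backward branch (target address below the branch, typically a loop) is predicted taken, and a forward branch is predicted not taken. The addresses are illustrative:

```python
# Sketch of static branch prediction: direction is inferred from the target offset.

def static_predict(branch_addr, target_addr):
    """Return True if the branch is predicted taken."""
    return target_addr < branch_addr          # backward branch => predict taken

assert static_predict(0x4010, 0x4000) is True     # loop-closing backward branch
assert static_predict(0x4010, 0x4040) is False    # forward skip, e.g. error path
```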
Features of NetBurst… Prefetching, by three techniques:
• Hardware instruction fetcher
• Prefetch instructions
• Hardware that fetches data and instructions directly into the second-level cache
Caching:
• Supports up to 3 levels of cache, all exclusive.
• First level: separate data cache and instruction (trace) cache.
Heading to Core
NetBurst → Core → Nehalem
Core Microarchitecture
Core Microarchitecture
Features of Core Architecture
Wide Dynamic Execution:
• Each core is wider and can fetch, decode, and execute 4 instructions at a time; NetBurst could execute only 3.
• So a quad-core processor executes 16 instructions at once.
• Adds more simple decoders than NetBurst had.
• Decoders for x86 instructions:
• Simple: translates to one micro-operation.
• Complex: translates to more than one micro-operation.
Wide Dynamic Execution… Macrofusion:
• In previous-generation processors, each incoming instruction was individually decoded and executed.
• Macrofusion enables common instruction pairs (such as a compare followed by a conditional jump) to be combined into a single internal micro-op during decoding.
• Increases overall IPC and energy efficiency.
• The architecture uses an enhanced Arithmetic Logic Unit (ALU) to support macrofusion.
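Macrofusion at decode can be sketched as a pairing pass over the instruction stream: a compare followed by a conditional jump is emitted as one fused micro-op instead of two. The mnemonics and the fusion rule here are simplified illustrations, not the exact hardware pairing rules:

```python
# Sketch of macrofusion: fuse cmp/test + conditional-jump pairs at decode.

FUSIBLE_FIRST = {"cmp", "test"}
COND_JUMPS = {"je", "jne", "jl", "jg"}

def decode(instrs):
    uops, i = [], 0
    while i < len(instrs):
        op = instrs[i]
        if op in FUSIBLE_FIRST and i + 1 < len(instrs) and instrs[i + 1] in COND_JUMPS:
            uops.append(f"{op}+{instrs[i + 1]}")   # one fused compare-and-branch uop
            i += 2
        else:
            uops.append(op)
            i += 1
    return uops

out = decode(["mov", "cmp", "je", "add"])
assert out == ["mov", "cmp+je", "add"]             # 4 instructions -> 3 uops
```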
Wide Dynamic Execution…
Advanced Digital Media Boost
• NetBurst executed a 128-bit SSE instruction in two cycles, handling 64 bits per cycle.
• Core executes one 128-bit SSE instruction in a single clock cycle.
Smart Memory Access
Memory disambiguation:
• Intelligent algorithms identify which loads are independent of stores, or are safe to execute ahead of stores, while ensuring that no data dependencies are violated.
• If a load turns out to be invalid, the hardware detects the conflict, reloads the correct data, and re-executes the instruction.
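The two halves of memory disambiguation can be sketched as a hoist check plus a replay check: a load may run ahead of older stores only if it cannot alias them, and if speculation was wrong it is detected and replayed with the correct data. Addresses and the checker are illustrative:

```python
# Sketch of memory disambiguation: speculative load hoisting with replay.

def may_hoist(load_addr, older_store_addrs):
    """Load can run early only if no pending older store touches its address."""
    return load_addr not in older_store_addrs

def check_and_replay(load_addr, executed_early, resolved_store_addrs, memory):
    """After older stores resolve, replay the load if it actually aliased one."""
    if executed_early and load_addr in resolved_store_addrs:
        return memory[load_addr], True        # conflict: reload correct data
    return memory[load_addr], False

mem = {0x100: 7, 0x200: 9}
assert may_hoist(0x100, {0x200}) is True      # no alias: safe to run early
assert may_hoist(0x100, {0x100}) is False     # alias: must wait for the store

mem[0x100] = 42                               # the older store finally writes
val, replayed = check_and_replay(0x100, True, {0x100}, mem)
assert (val, replayed) == (42, True)          # wrong speculation was replayed
```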
Advanced Smart Cache
• The execution cores share one L2 cache instead of each core having a separate one.
• Data has to be stored in only one place, which every core can access, thereby optimizing cache resources.
• When one core has minimal cache requirements, the other cores can increase their share of the L2 cache.
• Load-based sharing reduces cache misses and increases performance.
• Advantages: higher cache hit rate, reduced bus traffic, and lower latency to data.
Intelligent Power Capability
• Manages the runtime power consumption of all the processor's execution cores.
• Includes an advanced power-gating capability in which ultra fine-grained logic control turns on individual processor logic subsystems only if and when they are needed.
• Many buses and arrays are split so that data required only in some modes of operation can be put into a low-power state when not needed.
• Power gating greatly reduced the power footprint compared to previous processors.
Heading to Nehalem
NetBurst → Core → Nehalem
Nehalem Architecture
Nehalem Microarchitecture: Enhancements over the Core Microarchitecture
• QuickPath Technology
• Turbo Boost Technology
• Hyper-Threading
• Smarter Cache
• IPC Improvements
• Enhanced Branch Prediction
• Application Targeted Accelerators and SSE 4.2
• Intelligent Power Technology
• Enhanced Virtualization Technology support
QuickPath Technology
• Integrates a memory controller into each microprocessor.
• Connects processors and other components with a new high-speed interconnect.
• Scalable architecture in which memory scales with the number of processors.
• Scalable shared-memory implementation support.
• Scalable compute architecture.
• Lower memory access latency.
Turbo Boost Technology
• Automatically allows active processor cores to run faster than the base operating frequency.
• The Turbo Boost available for a given workload depends on:
• Number of active cores
• Estimated current consumption
• Estimated power consumption
• Processor temperature
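The decision rule above can be sketched as: raise the clock above the base frequency only while every monitored limit (active cores, current, power, temperature) has headroom. All limit values, the base frequency, and the 133 MHz step size below are invented for illustration, not published specifications:

```python
# Sketch of a Turbo Boost decision. Every numeric limit here is an assumption.

BASE_GHZ = 2.66

def turbo_frequency(active_cores, current_a, power_w, temp_c):
    if current_a > 100 or power_w > 130 or temp_c > 95:
        return BASE_GHZ                        # some limit exceeded: no turbo
    # Fewer active cores leave more thermal/electrical headroom => more bins.
    extra_bins = max(0, 4 - active_cores)      # assumed: up to +4 bins of 133 MHz
    return round(BASE_GHZ + extra_bins * 0.133, 3)

assert turbo_frequency(1, 60, 80, 70) > turbo_frequency(4, 60, 80, 70)
assert turbo_frequency(2, 60, 80, 99) == BASE_GHZ   # too hot: stay at base
```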
Hyper Threading in Nehalem
Smarter Cache
A new second-level Translation Lookaside Buffer (TLB):
• Has 512 entries.
• Improves virtual-to-physical address translation for a page, which in turn saves memory clock cycles.
New three-level cache hierarchy:
• L1 (32 KB instruction cache, 32 KB 8-way data cache) per core.
• L2 cache (256 KB, 8-way) per core.
• L3 cache (8 MB, 16-way) shared among the cores.
• All caches are inclusive.
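The benefit of a deeper hierarchy like L1/L2/L3 can be sketched with the standard average-memory-access-time (AMAT) calculation. The hit latencies and hit rates below are illustrative assumptions, not published Nehalem figures:

```python
# Sketch: average memory access time through a multi-level cache hierarchy.

def amat(levels, memory_latency):
    """levels: list of (hit_latency_cycles, hit_rate); misses fall through."""
    total, p_reach = 0.0, 1.0
    for latency, hit_rate in levels:
        total += p_reach * hit_rate * latency
        p_reach *= (1.0 - hit_rate)          # fraction that misses this level
    return total + p_reach * memory_latency

hierarchy = [(4, 0.90), (11, 0.70), (39, 0.80)]   # assumed (cycles, hit rate)
t3 = amat(hierarchy, memory_latency=200)
t1 = amat([(4, 0.90)], memory_latency=200)        # L1-only baseline
assert t3 < t1                       # extra levels cut the average latency
```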
• The L3 cache can be scaled in size based on the number of cores.
• A central queue acts as a crossbar and arbiter between the four cores and the uncore region of Nehalem.
• The uncore includes the L3 cache, the integrated memory controller, and the QPI links.
• Each core supports up to 10 outstanding data-cache misses and 16 total outstanding misses.
IPC Improvements
• Increased size of the out-of-order window and buffers.
• Improved implementation of synchronization instructions such as XCHG, so existing threaded software sees a performance boost.
• Improved hardware prefetch and better load-store scheduling.
IPC Improvements… Loop Stream Detector:
• First identifies repeating instruction sequences.
• Once a loop is identified, the traditional branch-prediction, fetch, and decode phases are temporarily turned off while the loop executes.
• This saves the cycles that would otherwise be wasted in those pipeline stages on a repeated set of instructions.
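The detector's job can be sketched as: spot a short repeating sequence in the instruction stream, then replay it from a small buffer instead of re-fetching and re-decoding every iteration. The detection heuristic below is a simplified illustration, not the hardware algorithm:

```python
# Sketch of a Loop Stream Detector: find a repeating period, replay it.

def detect_loop(stream, max_len=8):
    """Return the shortest repeating period of the stream, or None."""
    for period in range(1, max_len + 1):
        if len(stream) >= 2 * period and all(
            stream[i] == stream[i % period] for i in range(len(stream))
        ):
            return period
    return None

def front_end_work(stream):
    """Decode slots spent: whole stream without an LSD, one loop body with it."""
    period = detect_loop(stream)
    return len(stream) if period is None else period

body = ["load", "add", "cmp", "jne"]
assert detect_loop(body * 10) == 4
assert front_end_work(body * 10) == 4          # decode once, replay 10 times
assert front_end_work(["a", "b", "c"]) == 3    # no loop: decode everything
```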
Enhanced Branch Prediction:
• New second-level Branch Target Buffer: improves branch prediction in applications with a large code footprint (e.g., database applications).
• New renamed Return Stack Buffer: stores the forward and return pointers associated with calls and returns.
SSE 4.2:
• Introduces seven new SSE 4.2 instructions, including four that optimize string and text processing.
• STTNI (String and Text New Instructions) operate on 16 bytes at a time.
• This boosts XML parsing speed and enables faster search and pattern matching, lexing, tokenizing, and regular-expression evaluation.
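The STTNI idea can be sketched in pure Python: scan the text 16 bytes per step, checking a whole chunk against a delimiter set at once instead of byte by byte. This is only a stand-in for the real 16-byte packed-compare instructions:

```python
# Sketch of an STTNI-style scan: process the input in 16-byte chunks.

def find_delim_16(text: bytes, delims: bytes):
    """Return the index of the first delimiter byte, scanning 16 bytes per step."""
    dset = set(delims)
    for base in range(0, len(text), 16):
        chunk = text[base:base + 16]             # one 16-byte "register"
        hits = [i for i, b in enumerate(chunk) if b in dset]
        if hits:                                 # match mask: take lowest set lane
            return base + hits[0]
    return -1

xml = b"<item name=hello>payload</item>"
assert find_delim_16(xml, b"<>") == 0
assert find_delim_16(b"abc def", b" ") == 3
assert find_delim_16(b"nodelims", b",;") == -1
```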
Intelligent Power Technology
Integrated power gates:
• Allow each core to be idled independently down to near-zero power, reducing idle power.
Automated low-power states:
• Automatically put the processor and memory into the lowest available power states that still meet the requirements of the current workload.
Enhanced Virtualization
QuickPath-enabled scalable shared memory:
• The hypervisor can pin a virtual machine to a specific processor and its dedicated memory.
Hardware-assisted page-table management:
• Allows the guest OS more direct access to the hardware, reducing compute-intensive software translation by the hypervisor.
Directed I/O:
• Speeds data movement and eliminates much of the performance overhead by giving designated virtual machines their own dedicated I/O devices, thus reducing the overhead of the VMM in managing I/O traffic.
Virtualized connectivity:
• Integrates extensive hardware assists into the I/O devices.
• Performing routing functions to and from virtual machines in dedicated network silicon speeds delivery and reduces the load on the VMM and server processors.
• Improves throughput up to two times over non-hardware-assisted devices.
Enhancements Over the Core Microarchitecture
• Pipeline: 14 stages in Core, but 20 to 24 stages in Nehalem.
• Branch prediction: advanced RSB and an L2 branch predictor.
• Unified second-level TLB: 512 entries, against Core's 256.
• Macrofusion: now also fuses macro-ops in 64-bit mode, which Core supported only in 32-bit mode.
• Loop Stream Detection: more efficient in Nehalem.
• The execution engine and the out-of-order scheduler:
• The Reorder Buffer is a third larger, up from 96 to 128 entries.
• The Reservation Station (which schedules operations onto available execution units) gains an extra four slots, allowing 36 entries.
SNEAK PEEK AT NVIDIA TEGRA GPU
KEY FEATURES
• Eight processors, independently power-managed.
KEY FEATURES…
• Graphics Processor: renders 3D visuals for gaming and the touch interface.
• Video Decode Processor: macroblock algorithms, VLD, and color-space conversions for HD video streaming and playback.
• Video Encode Processor: video encode algorithms for HD streaming and recording.
KEY FEATURES…
• Image Signal Processor: light balance, edge enhancement, and noise reduction for real-time photo enhancement.
• Audio Processor: analog audio signal processing.
• Dual-core ARM Cortex-A9 CPU: general-purpose computing, e.g. web surfing.
• ARM7 Processor: system-management functions such as monitoring the battery and turning processing units on and off.
Key Features…
• Each processor is optimized for specific tasks.
• Intelligent power management achieves the lowest power footprint.
• Multitasking loads are handled by enabling a dedicated set of processors.
• For non-multitasking loads, only the processor best optimized for the task is turned on; the rest are powered off.
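This per-task power-gating scheme can be sketched as: enable only the processors a workload needs (plus the always-on ARM7 system manager) and power-gate the rest. Processor names come from the slides; all power numbers are invented for illustration:

```python
# Sketch of Tegra-style selective power gating. Milliwatt figures are assumptions.

IDLE_MW = 0   # power-gated processors are modeled as drawing nothing

PROCESSORS = {   # name -> assumed active power in milliwatts
    "gpu": 300, "video_decode": 150, "video_encode": 180,
    "isp": 120, "audio": 40, "cpu": 500, "arm7": 10,
}

def active_power(workload_needs):
    """Total power with only the needed processors (plus the ARM7 manager) on."""
    on = set(workload_needs) | {"arm7"}      # ARM7 always manages the system
    return sum(PROCESSORS[p] if p in on else IDLE_MW for p in PROCESSORS)

assert active_power({"audio"}) == 50                   # music playback
assert active_power({"video_decode", "audio"}) == 200  # video playback
assert active_power(PROCESSORS) == sum(PROCESSORS.values())  # everything busy
```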
REFERENCES AND COURTESY
• Intel white papers
• NVIDIA Tegra white paper
• http://www.bit-tech.net
• http://www.trustedreviews.com/cpu-memory/review/2008/11/03/Intel-Core-i7-Nehalem-Architecture-Overview
• http://www.tomshardware.com/reviews/Intel-i7-nehalem-cpu,2041-3.html
• www.wikipedia.com